Test Report: Docker_Linux_crio 21683

ec1ad263eb9d75fb579dc5b6c2680f618af3e384:2025-10-09:41836

Failed tests (56/166)

Order  Failed test  Duration (s)
27 TestAddons/Setup 514.48
38 TestErrorSpam/setup 498.44
47 TestFunctional/serial/StartWithProxy 499.59
49 TestFunctional/serial/SoftStart 366.78
51 TestFunctional/serial/KubectlGetPods 2.16
61 TestFunctional/serial/MinikubeKubectlCmd 2.15
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 2.17
63 TestFunctional/serial/ExtraConfig 737.05
64 TestFunctional/serial/ComponentHealth 2
67 TestFunctional/serial/InvalidService 0.05
70 TestFunctional/parallel/DashboardCmd 1.89
73 TestFunctional/parallel/StatusCmd 2.72
77 TestFunctional/parallel/ServiceCmdConnect 1.58
79 TestFunctional/parallel/PersistentVolumeClaim 241.59
83 TestFunctional/parallel/MySQL 1.47
89 TestFunctional/parallel/NodeLabels 1.33
94 TestFunctional/parallel/ServiceCmd/DeployApp 0.06
95 TestFunctional/parallel/ServiceCmd/List 0.32
96 TestFunctional/parallel/ServiceCmd/JSONOutput 0.35
97 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
98 TestFunctional/parallel/ServiceCmd/Format 0.35
99 TestFunctional/parallel/ServiceCmd/URL 0.31
101 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.34
104 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0.07
105 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 98.73
117 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.95
120 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.98
122 TestFunctional/parallel/MountCmd/any-port 2.35
123 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.64
124 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
127 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
128 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
141 TestMultiControlPlane/serial/StartCluster 501.26
142 TestMultiControlPlane/serial/DeployApp 91.83
143 TestMultiControlPlane/serial/PingHostFromPods 1.42
144 TestMultiControlPlane/serial/AddWorkerNode 1.56
145 TestMultiControlPlane/serial/NodeLabels 1.37
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.64
147 TestMultiControlPlane/serial/CopyFile 1.62
148 TestMultiControlPlane/serial/StopSecondaryNode 1.72
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.66
150 TestMultiControlPlane/serial/RestartSecondaryNode 48.13
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.65
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 370.15
153 TestMultiControlPlane/serial/DeleteSecondaryNode 1.86
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.61
155 TestMultiControlPlane/serial/StopCluster 1.38
156 TestMultiControlPlane/serial/RestartCluster 368.5
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.67
158 TestMultiControlPlane/serial/AddSecondaryNode 1.6
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.65
163 TestJSONOutput/start/Command 498.05
166 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestMinikubeProfile 504.9
221 TestMultiNode/serial/ValidateNameConflict 7200.06
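
Any entry in the table above can be re-run in isolation against a locally built binary. The invocation below is a sketch, assuming the standard minikube integration-test harness (Go tests under test/integration driven through the repository Makefile via TEST_ARGS); substitute the -test.run pattern and start args for the failing test of interest:

	# Hedged example: re-run only TestAddons/Setup with the docker driver and the crio runtime
	env TEST_ARGS="-minikube-start-args=--driver=docker --container-runtime=crio -test.run TestAddons/Setup" make integration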
TestAddons/Setup (514.48s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-139298 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-139298 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (8m34.443995126s)

-- stdout --
	* [addons-139298] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "addons-139298" primary control-plane node in "addons-139298" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	I1009 18:39:29.416317  142849 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:39:29.416570  142849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:39:29.416579  142849 out.go:374] Setting ErrFile to fd 2...
	I1009 18:39:29.416583  142849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:39:29.416799  142849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 18:39:29.417333  142849 out.go:368] Setting JSON to false
	I1009 18:39:29.418260  142849 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1318,"bootTime":1760033851,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:39:29.418358  142849 start.go:143] virtualization: kvm guest
	I1009 18:39:29.420525  142849 out.go:179] * [addons-139298] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:39:29.421984  142849 notify.go:221] Checking for updates...
	I1009 18:39:29.422026  142849 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 18:39:29.423407  142849 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:39:29.424940  142849 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 18:39:29.426449  142849 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 18:39:29.427922  142849 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:39:29.429301  142849 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:39:29.430873  142849 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 18:39:29.454978  142849 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:39:29.455071  142849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:39:29.518195  142849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-09 18:39:29.507304502 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:39:29.518296  142849 docker.go:319] overlay module found
	I1009 18:39:29.520102  142849 out.go:179] * Using the docker driver based on user configuration
	I1009 18:39:29.521434  142849 start.go:309] selected driver: docker
	I1009 18:39:29.521453  142849 start.go:930] validating driver "docker" against <nil>
	I1009 18:39:29.521465  142849 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:39:29.522156  142849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:39:29.586679  142849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-09 18:39:29.57655356 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:39:29.586836  142849 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 18:39:29.587043  142849 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:39:29.588866  142849 out.go:179] * Using Docker driver with root privileges
	I1009 18:39:29.590163  142849 cni.go:84] Creating CNI manager for ""
	I1009 18:39:29.590212  142849 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:39:29.590224  142849 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:39:29.590297  142849 start.go:353] cluster config:
	{Name:addons-139298 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-139298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1009 18:39:29.592015  142849 out.go:179] * Starting "addons-139298" primary control-plane node in "addons-139298" cluster
	I1009 18:39:29.593397  142849 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 18:39:29.594829  142849 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:39:29.596121  142849 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:39:29.596154  142849 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:39:29.596162  142849 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:39:29.596171  142849 cache.go:58] Caching tarball of preloaded images
	I1009 18:39:29.596257  142849 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:39:29.596267  142849 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:39:29.596570  142849 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/config.json ...
	I1009 18:39:29.596601  142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/config.json: {Name:mk74c72bc049148ef11108d8a71c51887cf15c22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:29.612903  142849 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1009 18:39:29.613024  142849 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1009 18:39:29.613041  142849 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
	I1009 18:39:29.613045  142849 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
	I1009 18:39:29.613055  142849 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1009 18:39:29.613062  142849 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from local cache
	I1009 18:39:42.574172  142849 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from cached tarball
	I1009 18:39:42.574235  142849 cache.go:232] Successfully downloaded all kic artifacts
	I1009 18:39:42.574286  142849 start.go:361] acquireMachinesLock for addons-139298: {Name:mkaa7e9ae30ef19808b4315a06326fba69a900ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:39:42.575033  142849 start.go:365] duration metric: took 710.33µs to acquireMachinesLock for "addons-139298"
	I1009 18:39:42.575077  142849 start.go:94] Provisioning new machine with config: &{Name:addons-139298 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-139298 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:39:42.575154  142849 start.go:126] createHost starting for "" (driver="docker")
	I1009 18:39:42.655739  142849 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1009 18:39:42.656037  142849 start.go:160] libmachine.API.Create for "addons-139298" (driver="docker")
	I1009 18:39:42.656070  142849 client.go:168] LocalClient.Create starting
	I1009 18:39:42.656193  142849 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem
	I1009 18:39:42.734530  142849 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem
	I1009 18:39:42.944837  142849 cli_runner.go:164] Run: docker network inspect addons-139298 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:39:42.962609  142849 cli_runner.go:211] docker network inspect addons-139298 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:39:42.962703  142849 network_create.go:284] running [docker network inspect addons-139298] to gather additional debugging logs...
	I1009 18:39:42.962724  142849 cli_runner.go:164] Run: docker network inspect addons-139298
	W1009 18:39:42.979456  142849 cli_runner.go:211] docker network inspect addons-139298 returned with exit code 1
	I1009 18:39:42.979493  142849 network_create.go:287] error running [docker network inspect addons-139298]: docker network inspect addons-139298: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-139298 not found
	I1009 18:39:42.979506  142849 network_create.go:289] output of [docker network inspect addons-139298]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-139298 not found
	
	** /stderr **
	I1009 18:39:42.979586  142849 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:39:42.996840  142849 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0019a2f30}
	I1009 18:39:42.996878  142849 network_create.go:124] attempt to create docker network addons-139298 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:39:42.996925  142849 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-139298 addons-139298
	I1009 18:39:43.055145  142849 network_create.go:108] docker network addons-139298 192.168.49.0/24 created
	I1009 18:39:43.055177  142849 kic.go:121] calculated static IP "192.168.49.2" for the "addons-139298" container
	I1009 18:39:43.055257  142849 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:39:43.071774  142849 cli_runner.go:164] Run: docker volume create addons-139298 --label name.minikube.sigs.k8s.io=addons-139298 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:39:43.094360  142849 oci.go:103] Successfully created a docker volume addons-139298
	I1009 18:39:43.094473  142849 cli_runner.go:164] Run: docker run --rm --name addons-139298-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-139298 --entrypoint /usr/bin/test -v addons-139298:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 18:39:45.353646  142849 cli_runner.go:217] Completed: docker run --rm --name addons-139298-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-139298 --entrypoint /usr/bin/test -v addons-139298:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (2.259123555s)
	I1009 18:39:45.353706  142849 oci.go:107] Successfully prepared a docker volume addons-139298
	I1009 18:39:45.353745  142849 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:39:45.353775  142849 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:39:45.353837  142849 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-139298:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:39:49.753281  142849 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-139298:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.39939666s)
	I1009 18:39:49.753312  142849 kic.go:203] duration metric: took 4.39953526s to extract preloaded images to volume ...
	W1009 18:39:49.753422  142849 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:39:49.753464  142849 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:39:49.753514  142849 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:39:49.812917  142849 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-139298 --name addons-139298 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-139298 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-139298 --network addons-139298 --ip 192.168.49.2 --volume addons-139298:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 18:39:50.105989  142849 cli_runner.go:164] Run: docker container inspect addons-139298 --format={{.State.Running}}
	I1009 18:39:50.125263  142849 cli_runner.go:164] Run: docker container inspect addons-139298 --format={{.State.Status}}
	I1009 18:39:50.144553  142849 cli_runner.go:164] Run: docker exec addons-139298 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:39:50.194428  142849 oci.go:144] the created container "addons-139298" has a running status.
	I1009 18:39:50.194461  142849 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/addons-139298/id_rsa...
	I1009 18:39:50.429782  142849 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-137890/.minikube/machines/addons-139298/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:39:50.464319  142849 cli_runner.go:164] Run: docker container inspect addons-139298 --format={{.State.Status}}
	I1009 18:39:50.483524  142849 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:39:50.483550  142849 kic_runner.go:114] Args: [docker exec --privileged addons-139298 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:39:50.531004  142849 cli_runner.go:164] Run: docker container inspect addons-139298 --format={{.State.Status}}
	I1009 18:39:50.550668  142849 machine.go:93] provisionDockerMachine start ...
	I1009 18:39:50.550784  142849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-139298
	I1009 18:39:50.570460  142849 main.go:141] libmachine: Using SSH client type: native
	I1009 18:39:50.570685  142849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1009 18:39:50.570697  142849 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:39:50.721476  142849 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-139298
	
	I1009 18:39:50.721537  142849 ubuntu.go:182] provisioning hostname "addons-139298"
	I1009 18:39:50.721611  142849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-139298
	I1009 18:39:50.740695  142849 main.go:141] libmachine: Using SSH client type: native
	I1009 18:39:50.740914  142849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1009 18:39:50.740928  142849 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-139298 && echo "addons-139298" | sudo tee /etc/hostname
	I1009 18:39:50.899338  142849 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-139298
	
	I1009 18:39:50.899447  142849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-139298
	I1009 18:39:50.917112  142849 main.go:141] libmachine: Using SSH client type: native
	I1009 18:39:50.917334  142849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1009 18:39:50.917351  142849 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-139298' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-139298/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-139298' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:39:51.064444  142849 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:39:51.064475  142849 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 18:39:51.064517  142849 ubuntu.go:190] setting up certificates
	I1009 18:39:51.064536  142849 provision.go:84] configureAuth start
	I1009 18:39:51.064594  142849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-139298
	I1009 18:39:51.082326  142849 provision.go:143] copyHostCerts
	I1009 18:39:51.082417  142849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 18:39:51.082533  142849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 18:39:51.082592  142849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 18:39:51.082644  142849 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.addons-139298 san=[127.0.0.1 192.168.49.2 addons-139298 localhost minikube]
	I1009 18:39:51.345908  142849 provision.go:177] copyRemoteCerts
	I1009 18:39:51.345969  142849 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:39:51.346017  142849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-139298
	I1009 18:39:51.364326  142849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/addons-139298/id_rsa Username:docker}
	I1009 18:39:51.469087  142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:39:51.488563  142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 18:39:51.506122  142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 18:39:51.523714  142849 provision.go:87] duration metric: took 459.158853ms to configureAuth
	I1009 18:39:51.523752  142849 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:39:51.523932  142849 config.go:182] Loaded profile config "addons-139298": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:39:51.524032  142849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-139298
	I1009 18:39:51.542486  142849 main.go:141] libmachine: Using SSH client type: native
	I1009 18:39:51.542707  142849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1009 18:39:51.542725  142849 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:39:51.804333  142849 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:39:51.804361  142849 machine.go:96] duration metric: took 1.253670246s to provisionDockerMachine
	I1009 18:39:51.804371  142849 client.go:171] duration metric: took 9.148295347s to LocalClient.Create
	I1009 18:39:51.804409  142849 start.go:168] duration metric: took 9.148374388s to libmachine.API.Create "addons-139298"
	I1009 18:39:51.804420  142849 start.go:294] postStartSetup for "addons-139298" (driver="docker")
	I1009 18:39:51.804433  142849 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:39:51.804487  142849 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:39:51.804537  142849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-139298
	I1009 18:39:51.823166  142849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/addons-139298/id_rsa Username:docker}
	I1009 18:39:51.928444  142849 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:39:51.932029  142849 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:39:51.932058  142849 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:39:51.932073  142849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 18:39:51.932140  142849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 18:39:51.932175  142849 start.go:297] duration metric: took 127.747641ms for postStartSetup
	I1009 18:39:51.932508  142849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-139298
	I1009 18:39:51.950046  142849 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/config.json ...
	I1009 18:39:51.950310  142849 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:39:51.950351  142849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-139298
	I1009 18:39:51.969058  142849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/addons-139298/id_rsa Username:docker}
	I1009 18:39:52.069900  142849 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:39:52.074436  142849 start.go:129] duration metric: took 9.499262716s to createHost
	I1009 18:39:52.074462  142849 start.go:84] releasing machines lock for "addons-139298", held for 9.499405215s
	I1009 18:39:52.074536  142849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-139298
	I1009 18:39:52.091847  142849 ssh_runner.go:195] Run: cat /version.json
	I1009 18:39:52.091879  142849 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:39:52.091894  142849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-139298
	I1009 18:39:52.091945  142849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-139298
	I1009 18:39:52.110072  142849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/addons-139298/id_rsa Username:docker}
	I1009 18:39:52.110738  142849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/addons-139298/id_rsa Username:docker}
	I1009 18:39:52.263605  142849 ssh_runner.go:195] Run: systemctl --version
	I1009 18:39:52.269960  142849 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:39:52.305416  142849 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:39:52.310236  142849 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:39:52.310297  142849 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:39:52.337865  142849 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 18:39:52.337887  142849 start.go:496] detecting cgroup driver to use...
	I1009 18:39:52.337920  142849 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:39:52.337969  142849 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:39:52.354977  142849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:39:52.368019  142849 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:39:52.368085  142849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:39:52.385214  142849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:39:52.403678  142849 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:39:52.486629  142849 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:39:52.575347  142849 docker.go:234] disabling docker service ...
	I1009 18:39:52.575464  142849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:39:52.595394  142849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:39:52.608287  142849 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:39:52.696154  142849 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:39:52.778545  142849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:39:52.791534  142849 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:39:52.806254  142849 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:39:52.806322  142849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:39:52.817298  142849 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:39:52.817366  142849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:39:52.826395  142849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:39:52.835443  142849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:39:52.844309  142849 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:39:52.852771  142849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:39:52.861654  142849 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:39:52.875723  142849 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:39:52.885075  142849 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:39:52.892588  142849 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 18:39:52.892651  142849 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 18:39:52.905905  142849 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:39:52.913851  142849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:39:52.993186  142849 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:39:53.100800  142849 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:39:53.100891  142849 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:39:53.105077  142849 start.go:564] Will wait 60s for crictl version
	I1009 18:39:53.105137  142849 ssh_runner.go:195] Run: which crictl
	I1009 18:39:53.108706  142849 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:39:53.134103  142849 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:39:53.134231  142849 ssh_runner.go:195] Run: crio --version
	I1009 18:39:53.163457  142849 ssh_runner.go:195] Run: crio --version
	I1009 18:39:53.194280  142849 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:39:53.195522  142849 cli_runner.go:164] Run: docker network inspect addons-139298 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:39:53.212147  142849 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:39:53.216312  142849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:39:53.226475  142849 kubeadm.go:883] updating cluster {Name:addons-139298 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-139298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:39:53.226607  142849 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:39:53.226650  142849 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:39:53.259127  142849 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:39:53.259157  142849 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:39:53.259220  142849 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:39:53.285739  142849 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:39:53.285763  142849 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:39:53.285773  142849 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 18:39:53.285856  142849 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-139298 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-139298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:39:53.285916  142849 ssh_runner.go:195] Run: crio config
	I1009 18:39:53.330789  142849 cni.go:84] Creating CNI manager for ""
	I1009 18:39:53.330808  142849 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:39:53.330828  142849 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:39:53.330855  142849 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-139298 NodeName:addons-139298 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:39:53.331019  142849 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-139298"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:39:53.331093  142849 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:39:53.339832  142849 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 18:39:53.339892  142849 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:39:53.348481  142849 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1009 18:39:53.361395  142849 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:39:53.377002  142849 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1009 18:39:53.390030  142849 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:39:53.393824  142849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:39:53.404225  142849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:39:53.487307  142849 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:39:53.507940  142849 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298 for IP: 192.168.49.2
	I1009 18:39:53.507969  142849 certs.go:195] generating shared ca certs ...
	I1009 18:39:53.508006  142849 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:53.508941  142849 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 18:39:53.638790  142849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt ...
	I1009 18:39:53.638824  142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt: {Name:mk926486f9e0523ea70fea9163d972006ea77f6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:53.639707  142849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key ...
	I1009 18:39:53.639733  142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key: {Name:mkbb7fdee2e3223ce98cc7eb1427bb63146a4001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:53.639861  142849 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 18:39:53.931315  142849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt ...
	I1009 18:39:53.931351  142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt: {Name:mk8b123b71e93c7266be83f7db2711ce2438ac01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:53.932540  142849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key ...
	I1009 18:39:53.932566  142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key: {Name:mkc9fa7331fb59618160a9960ed0d3f8d4cab034 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:53.932717  142849 certs.go:257] generating profile certs ...
	I1009 18:39:53.932792  142849 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/client.key
	I1009 18:39:53.932808  142849 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/client.crt with IP's: []
	I1009 18:39:54.151720  142849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/client.crt ...
	I1009 18:39:54.151757  142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/client.crt: {Name:mk90d6b29686c2412fb39404cfcdbd54eafe5bb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:54.151969  142849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/client.key ...
	I1009 18:39:54.151986  142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/client.key: {Name:mk9a76227a473e66c172f016a0ba484179fde245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:54.152100  142849 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.key.4cc4f899
	I1009 18:39:54.152124  142849 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.crt.4cc4f899 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1009 18:39:54.515706  142849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.crt.4cc4f899 ...
	I1009 18:39:54.515740  142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.crt.4cc4f899: {Name:mkee55d6c031b17a28894fabb0580ae72888c333 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:54.515941  142849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.key.4cc4f899 ...
	I1009 18:39:54.515957  142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.key.4cc4f899: {Name:mk920799f3ab74f80e2e3e1063eddc59a20dc5e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:54.516037  142849 certs.go:382] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.crt.4cc4f899 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.crt
	I1009 18:39:54.516131  142849 certs.go:386] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.key.4cc4f899 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.key
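The apiserver serving cert generated just above is signed for the service IP 10.96.0.1 (derived from the 10.96.0.0/12 service CIDR), 127.0.0.1, 10.0.0.1 and the node IP 192.168.49.2, in addition to the certSANs in the ClusterConfiguration. To confirm what actually ended up in such a cert, its SAN extension can be dumped; the path below is taken from this log and the command is only a sketch:

	openssl x509 -in /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'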
	I1009 18:39:54.516180  142849 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/proxy-client.key
	I1009 18:39:54.516198  142849 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/proxy-client.crt with IP's: []
	I1009 18:39:54.632996  142849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/proxy-client.crt ...
	I1009 18:39:54.633030  142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/proxy-client.crt: {Name:mk9fcf902e178624332654eb2c089642aaaec6e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:54.633787  142849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/proxy-client.key ...
	I1009 18:39:54.633807  142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/proxy-client.key: {Name:mk2d7d42f26e04dd7dfaf8057702acf3314ab3f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:54.634503  142849 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:39:54.634543  142849 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 18:39:54.634564  142849 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:39:54.634586  142849 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 18:39:54.635313  142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:39:54.654055  142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:39:54.671373  142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:39:54.688697  142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:39:54.706111  142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 18:39:54.722885  142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:39:54.740033  142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:39:54.757144  142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:39:54.774060  142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:39:54.793234  142849 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:39:54.806165  142849 ssh_runner.go:195] Run: openssl version
	I1009 18:39:54.812416  142849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:39:54.824578  142849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:39:54.828811  142849 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:39:54.828878  142849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:39:54.863742  142849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
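The two steps above install the minikube CA into the node's system trust store: `openssl x509 -hash -noout` prints the certificate's subject hash, and that hash (b5213941 here) becomes the `<hash>.0` symlink name that OpenSSL scans in /etc/ssl/certs when building trust chains. A minimal standalone version of the same idea, assuming only the PEM file from this log:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"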
	I1009 18:39:54.873297  142849 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:39:54.877286  142849 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:39:54.877352  142849 kubeadm.go:400] StartCluster: {Name:addons-139298 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-139298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:39:54.877449  142849 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:39:54.877523  142849 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:39:54.906831  142849 cri.go:89] found id: ""
	I1009 18:39:54.906895  142849 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:39:54.915263  142849 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:39:54.923559  142849 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:39:54.923626  142849 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:39:54.931404  142849 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:39:54.931427  142849 kubeadm.go:157] found existing configuration files:
	
	I1009 18:39:54.931467  142849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:39:54.938962  142849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:39:54.939028  142849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:39:54.946419  142849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:39:54.954219  142849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:39:54.954267  142849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:39:54.961617  142849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:39:54.968844  142849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:39:54.968900  142849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:39:54.975926  142849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:39:54.983279  142849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:39:54.983343  142849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:39:54.990468  142849 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:39:55.062184  142849 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:39:55.121091  142849 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:44:00.178199  142849 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 18:44:00.178368  142849 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:44:00.181840  142849 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:44:00.181930  142849 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:44:00.182064  142849 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:44:00.182134  142849 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:44:00.182184  142849 kubeadm.go:318] OS: Linux
	I1009 18:44:00.182253  142849 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:44:00.182316  142849 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:44:00.182408  142849 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:44:00.182472  142849 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:44:00.182513  142849 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:44:00.182565  142849 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:44:00.182606  142849 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:44:00.182647  142849 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:44:00.182724  142849 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:44:00.182855  142849 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:44:00.182955  142849 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:44:00.183016  142849 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:44:00.185577  142849 out.go:252]   - Generating certificates and keys ...
	I1009 18:44:00.185654  142849 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:44:00.185727  142849 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:44:00.185786  142849 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:44:00.185838  142849 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:44:00.185892  142849 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:44:00.185937  142849 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 18:44:00.185983  142849 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 18:44:00.186154  142849 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-139298 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:44:00.186228  142849 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 18:44:00.186395  142849 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-139298 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:44:00.186503  142849 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:44:00.186569  142849 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:44:00.186612  142849 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 18:44:00.186663  142849 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:44:00.186713  142849 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:44:00.186766  142849 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:44:00.186818  142849 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:44:00.186889  142849 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:44:00.186941  142849 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:44:00.187015  142849 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:44:00.187083  142849 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:44:00.188574  142849 out.go:252]   - Booting up control plane ...
	I1009 18:44:00.188650  142849 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:44:00.188720  142849 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:44:00.188783  142849 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:44:00.188885  142849 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:44:00.188964  142849 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:44:00.189056  142849 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:44:00.189146  142849 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:44:00.189190  142849 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:44:00.189299  142849 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:44:00.189490  142849 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:44:00.189596  142849 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001006049s
	I1009 18:44:00.189741  142849 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:44:00.189830  142849 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:44:00.189954  142849 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:44:00.190024  142849 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:44:00.190085  142849 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000331782s
	I1009 18:44:00.190152  142849 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000711447s
	I1009 18:44:00.190226  142849 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000653521s
	I1009 18:44:00.190232  142849 kubeadm.go:318] 
	I1009 18:44:00.190308  142849 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:44:00.190404  142849 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:44:00.190483  142849 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:44:00.190579  142849 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:44:00.190712  142849 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:44:00.190792  142849 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:44:00.190851  142849 kubeadm.go:318] 
	W1009 18:44:00.191013  142849 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [addons-139298 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [addons-139298 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001006049s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000331782s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000711447s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000653521s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
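The crictl hint a few lines up can also be followed from the host without opening a separate shell on the node: minikube ssh forwards the command, and the socket path is the one this log already uses. The profile name and invocation below are a sketch only, not something the test performed:

	minikube ssh -p addons-139298 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then, for a failing container id taken from that listing:
	minikube ssh -p addons-139298 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID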
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [addons-139298 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [addons-139298 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001006049s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000331782s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000711447s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000653521s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 18:44:00.191111  142849 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 18:44:00.639084  142849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:44:00.651955  142849 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:44:00.652011  142849 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:44:00.660326  142849 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:44:00.660345  142849 kubeadm.go:157] found existing configuration files:
	
	I1009 18:44:00.660406  142849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:44:00.668277  142849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:44:00.668397  142849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:44:00.676109  142849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:44:00.684114  142849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:44:00.684181  142849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:44:00.692197  142849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:44:00.700419  142849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:44:00.700503  142849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:44:00.708357  142849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:44:00.716362  142849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:44:00.716452  142849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:44:00.724570  142849 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:44:00.761995  142849 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:44:00.762077  142849 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:44:00.782893  142849 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:44:00.782982  142849 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:44:00.783039  142849 kubeadm.go:318] OS: Linux
	I1009 18:44:00.783107  142849 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:44:00.783169  142849 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:44:00.783224  142849 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:44:00.783299  142849 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:44:00.783346  142849 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:44:00.783416  142849 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:44:00.783457  142849 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:44:00.783499  142849 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:44:00.843821  142849 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:44:00.843995  142849 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:44:00.844145  142849 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:44:00.851166  142849 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:44:00.854256  142849 out.go:252]   - Generating certificates and keys ...
	I1009 18:44:00.854355  142849 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:44:00.854455  142849 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:44:00.854580  142849 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:44:00.854675  142849 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:44:00.854766  142849 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:44:00.854847  142849 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:44:00.854943  142849 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:44:00.855044  142849 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:44:00.855164  142849 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:44:00.855285  142849 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:44:00.855349  142849 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:44:00.855449  142849 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:44:01.055104  142849 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:44:01.286049  142849 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:44:01.840411  142849 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:44:01.938562  142849 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:44:02.214511  142849 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:44:02.215019  142849 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:44:02.217245  142849 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:44:02.220606  142849 out.go:252]   - Booting up control plane ...
	I1009 18:44:02.220730  142849 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:44:02.220814  142849 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:44:02.220876  142849 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:44:02.234340  142849 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:44:02.234550  142849 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:44:02.241242  142849 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:44:02.241467  142849 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:44:02.241561  142849 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:44:02.348245  142849 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:44:02.348415  142849 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:44:03.349141  142849 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001031751s
	I1009 18:44:03.352060  142849 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:44:03.352187  142849 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:44:03.352320  142849 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:44:03.352438  142849 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:48:03.353372  142849 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000972123s
	I1009 18:48:03.353526  142849 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001015431s
	I1009 18:48:03.353637  142849 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001317584s
	I1009 18:48:03.353649  142849 kubeadm.go:318] 
	I1009 18:48:03.353761  142849 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:48:03.353886  142849 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:48:03.354039  142849 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:48:03.354175  142849 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:48:03.354298  142849 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:48:03.354475  142849 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:48:03.354486  142849 kubeadm.go:318] 
	I1009 18:48:03.358291  142849 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:48:03.358473  142849 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:48:03.359297  142849 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 18:48:03.359418  142849 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:48:03.359556  142849 kubeadm.go:402] duration metric: took 8m8.482207871s to StartCluster
	I1009 18:48:03.359811  142849 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:48:03.359985  142849 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:48:03.388677  142849 cri.go:89] found id: ""
	I1009 18:48:03.388721  142849 logs.go:282] 0 containers: []
	W1009 18:48:03.388734  142849 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:48:03.388742  142849 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:48:03.388946  142849 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:48:03.415393  142849 cri.go:89] found id: ""
	I1009 18:48:03.415428  142849 logs.go:282] 0 containers: []
	W1009 18:48:03.415440  142849 logs.go:284] No container was found matching "etcd"
	I1009 18:48:03.415446  142849 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:48:03.415495  142849 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:48:03.442580  142849 cri.go:89] found id: ""
	I1009 18:48:03.442605  142849 logs.go:282] 0 containers: []
	W1009 18:48:03.442613  142849 logs.go:284] No container was found matching "coredns"
	I1009 18:48:03.442620  142849 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:48:03.442670  142849 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:48:03.470120  142849 cri.go:89] found id: ""
	I1009 18:48:03.470148  142849 logs.go:282] 0 containers: []
	W1009 18:48:03.470157  142849 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:48:03.470164  142849 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:48:03.470212  142849 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:48:03.498917  142849 cri.go:89] found id: ""
	I1009 18:48:03.498947  142849 logs.go:282] 0 containers: []
	W1009 18:48:03.498958  142849 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:48:03.498966  142849 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:48:03.499026  142849 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:48:03.526724  142849 cri.go:89] found id: ""
	I1009 18:48:03.526757  142849 logs.go:282] 0 containers: []
	W1009 18:48:03.526767  142849 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:48:03.526776  142849 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:48:03.526842  142849 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:48:03.554780  142849 cri.go:89] found id: ""
	I1009 18:48:03.554814  142849 logs.go:282] 0 containers: []
	W1009 18:48:03.554825  142849 logs.go:284] No container was found matching "kindnet"
	I1009 18:48:03.554840  142849 logs.go:123] Gathering logs for kubelet ...
	I1009 18:48:03.554860  142849 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:48:03.623582  142849 logs.go:123] Gathering logs for dmesg ...
	I1009 18:48:03.623621  142849 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:48:03.636753  142849 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:48:03.636783  142849 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:48:03.702919  142849 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:48:03.693280    2397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:48:03.695178    2397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:48:03.695797    2397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:48:03.697455    2397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:48:03.697952    2397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:48:03.693280    2397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:48:03.695178    2397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:48:03.695797    2397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:48:03.697455    2397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:48:03.697952    2397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:48:03.702953  142849 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:48:03.702983  142849 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:48:03.766952  142849 logs.go:123] Gathering logs for container status ...
	I1009 18:48:03.766996  142849 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 18:48:03.798751  142849 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001031751s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000972123s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001015431s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001317584s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 18:48:03.798836  142849 out.go:285] * 
	* 
	W1009 18:48:03.798924  142849 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001031751s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000972123s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001015431s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001317584s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001031751s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000972123s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001015431s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001317584s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:48:03.798944  142849 out.go:285] * 
	* 
	W1009 18:48:03.800831  142849 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:48:03.804413  142849 out.go:203] 
	W1009 18:48:03.805486  142849 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001031751s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000972123s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001015431s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001317584s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001031751s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000972123s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001015431s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001317584s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:48:03.805512  142849 out.go:285] * 
	* 
	I1009 18:48:03.807236  142849 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:110: out/minikube-linux-amd64 start -p addons-139298 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (514.48s)
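
The kubeadm output above already names the follow-up: list the CRI-O managed containers and pull the logs of whichever control-plane component exited before its health check passed. A minimal sketch of that triage, assuming the addons-139298 node is still running and reachable with `minikube ssh -p addons-139298` (the profile name is taken from the command under test; the crictl and journalctl invocations come from the hints and gathered logs above):

	# open a shell on the failing control-plane node
	minikube ssh -p addons-139298

	# list all kube containers, including exited ones, over the CRI-O socket
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# inspect the logs of a failing container (substitute an ID from the listing above)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

	# CRI-O's own journal often records why the static pods never came up
	sudo journalctl -u crio -n 400
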

                                                
                                    
x
+
TestErrorSpam/setup (498.44s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-656427 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-656427 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p nospam-656427 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-656427 --driver=docker  --container-runtime=crio: exit status 80 (8m18.432270429s)

                                                
                                                
-- stdout --
	* [nospam-656427] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "nospam-656427" primary control-plane node in "nospam-656427" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost nospam-656427] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-656427] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000967359s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000383148s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000536099s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000603331s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501382095s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000041789s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00016769s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00046887s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501382095s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000041789s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00016769s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00046887s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
error_spam_test.go:83: "out/minikube-linux-amd64 start -p nospam-656427 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-656427 --driver=docker  --container-runtime=crio" failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"apiserver-kubelet-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"front-proxy-ca\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"front-proxy-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/ca\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/server\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] etcd/server serving cert is signed for DNS names [localhost nospam-656427] and IPs [192.168.49.2 127.0.0.1 ::1]"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/peer\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-656427] and IPs [192.168.49.2 127.0.0.1 ::1]"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/healthcheck-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"apiserver-etcd-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"sa\" key and public key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 1.000967359s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.000383148s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.000536099s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.000603331s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "X Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-kubelet-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/server certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/peer certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/healthcheck-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-etcd-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using the existing \"sa\" key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 1.501382095s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.000041789s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.00016769s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.00046887s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-kubelet-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/server certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/peer certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/healthcheck-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-etcd-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using the existing \"sa\" key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 1.501382095s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.000041789s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.00016769s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.00046887s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:110: minikube stdout:
* [nospam-656427] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=21683
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "nospam-656427" primary control-plane node in "nospam-656427" cluster
* Pulling base image v0.0.48-1759745255-21703 ...
* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
error_spam_test.go:111: minikube stderr:
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost nospam-656427] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-656427] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.000967359s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-scheduler is not healthy after 4m0.000383148s
[control-plane-check] kube-apiserver is not healthy after 4m0.000536099s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000603331s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
* 
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.501382095s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-scheduler is not healthy after 4m0.000041789s
[control-plane-check] kube-apiserver is not healthy after 4m0.00016769s
[control-plane-check] kube-controller-manager is not healthy after 4m0.00046887s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.501382095s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-scheduler is not healthy after 4m0.000041789s
[control-plane-check] kube-apiserver is not healthy after 4m0.00016769s
[control-plane-check] kube-controller-manager is not healthy after 4m0.00046887s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
* 
--- FAIL: TestErrorSpam/setup (498.44s)
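
A minimal triage sketch for this failure, following the crictl advice kubeadm prints above. It assumes the docker-driver node container for the "nospam-656427" profile is still present on the test host; the profile name comes from the run above, and CONTAINERID is a placeholder for an ID taken from the listing.
	# Open a shell on the minikube node (with the docker driver, the node is itself a container).
	minikube -p nospam-656427 ssh
	# Inside the node: list the control-plane containers, as the kubeadm output suggests.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect the logs of whichever component exited, using an ID from the listing.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# Back on the host: collect the full log bundle referenced above.
	minikube -p nospam-656427 logs --file=logs.txt
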
x
+
TestFunctional/serial/StartWithProxy (499.59s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-158523 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-158523 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: exit status 80 (8m18.259123811s)
-- stdout --
	* [functional-158523] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-158523" primary control-plane node in "functional-158523" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Found network options:
	  - HTTP_PROXY=localhost:41853
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:41853 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-158523 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-158523 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.02007ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001045398s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001124895s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001393531s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001784657s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000412328s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000683711s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000792078s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001784657s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000412328s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000683711s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000792078s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-amd64 start -p functional-158523 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio": exit status 80
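The kubeadm output captured above already points at the next diagnostic step: list the CRI-O containers on the node and read the logs of whichever control-plane container keeps exiting. A minimal sketch of that workflow for this run's profile (the ssh step is an assumption; the crictl invocations are the ones quoted verbatim in the kubeadm hint):

	# open a shell inside the minikube node for this profile
	out/minikube-linux-amd64 ssh -p functional-158523
	# list all kube containers known to CRI-O, including exited ones
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# read the logs of the failing container (substitute its CONTAINERID)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID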
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-158523
helpers_test.go:243: (dbg) docker inspect functional-158523:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	        "Created": "2025-10-09T18:56:39.519997973Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 156103,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:56:39.555178961Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hosts",
	        "LogPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4-json.log",
	        "Name": "/functional-158523",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-158523:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-158523",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	                "LowerDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-158523",
	                "Source": "/var/lib/docker/volumes/functional-158523/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-158523",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-158523",
	                "name.minikube.sigs.k8s.io": "functional-158523",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "50f61b8c99f766763e9e30d37584e537ddf10f74215a382a4221cbe7f7c2e821",
	            "SandboxKey": "/var/run/docker/netns/50f61b8c99f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-158523": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:a1:34:24:78:d1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6347eb7b6b2842e266f51d3873c8aa169a4287ea52510c3d62dc4ed41012963c",
	                    "EndpointID": "eb1ada23cbc058d52037dd94574627dd1b0fedf455f291e656f050bf06881952",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-158523",
	                        "dea3735c567c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523: exit status 6 (307.666588ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:04:52.793880  160565 status.go:458] kubeconfig endpoint: get endpoint: "functional-158523" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
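The status output above reports a stale kubectl context: the kubeconfig at /home/jenkins/minikube-integration/21683-137890/kubeconfig has no endpoint for "functional-158523", so status exits 6 even though the container is running. A minimal sketch of the fix the warning itself suggests, assuming the profile still exists:

	# rewrite the kubeconfig entry for this profile with the current endpoint
	out/minikube-linux-amd64 update-context -p functional-158523
	# re-check host, kubelet and apiserver state afterwards
	out/minikube-linux-amd64 status -p functional-158523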
helpers_test.go:252: <<< TestFunctional/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 logs -n 25
helpers_test.go:260: TestFunctional/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-681935                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-681935   │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │ 09 Oct 25 18:39 UTC │
	│ delete  │ -p download-only-484045                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-484045   │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │ 09 Oct 25 18:39 UTC │
	│ start   │ --download-only -p download-docker-070263 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-070263 │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │                     │
	│ delete  │ -p download-docker-070263                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-070263 │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │ 09 Oct 25 18:39 UTC │
	│ start   │ --download-only -p binary-mirror-721152 --alsologtostderr --binary-mirror http://127.0.0.1:36453 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-721152   │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │                     │
	│ delete  │ -p binary-mirror-721152                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-721152   │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │ 09 Oct 25 18:39 UTC │
	│ addons  │ disable dashboard -p addons-139298                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-139298          │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │                     │
	│ addons  │ enable dashboard -p addons-139298                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-139298          │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │                     │
	│ start   │ -p addons-139298 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-139298          │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │                     │
	│ delete  │ -p addons-139298                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-139298          │ jenkins │ v1.37.0 │ 09 Oct 25 18:48 UTC │ 09 Oct 25 18:48 UTC │
	│ start   │ -p nospam-656427 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-656427 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:48 UTC │                     │
	│ start   │ nospam-656427 --log_dir /tmp/nospam-656427 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ start   │ nospam-656427 --log_dir /tmp/nospam-656427 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ start   │ nospam-656427 --log_dir /tmp/nospam-656427 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ pause   │ nospam-656427 --log_dir /tmp/nospam-656427 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ pause   │ nospam-656427 --log_dir /tmp/nospam-656427 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ pause   │ nospam-656427 --log_dir /tmp/nospam-656427 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ unpause │ nospam-656427 --log_dir /tmp/nospam-656427 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ unpause │ nospam-656427 --log_dir /tmp/nospam-656427 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ unpause │ nospam-656427 --log_dir /tmp/nospam-656427 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ nospam-656427 --log_dir /tmp/nospam-656427 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ nospam-656427 --log_dir /tmp/nospam-656427 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ nospam-656427 --log_dir /tmp/nospam-656427 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ delete  │ -p nospam-656427                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ start   │ -p functional-158523 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-158523      │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:56:34
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:56:34.265222  155536 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:56:34.265498  155536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:56:34.265502  155536 out.go:374] Setting ErrFile to fd 2...
	I1009 18:56:34.265506  155536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:56:34.265722  155536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 18:56:34.266213  155536 out.go:368] Setting JSON to false
	I1009 18:56:34.267067  155536 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2343,"bootTime":1760033851,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:56:34.267165  155536 start.go:143] virtualization: kvm guest
	I1009 18:56:34.269631  155536 out.go:179] * [functional-158523] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:56:34.271011  155536 notify.go:221] Checking for updates...
	I1009 18:56:34.271038  155536 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 18:56:34.272444  155536 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:56:34.273704  155536 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 18:56:34.275030  155536 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 18:56:34.276374  155536 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:56:34.277653  155536 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:56:34.279185  155536 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 18:56:34.304765  155536 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:56:34.304943  155536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:56:34.370328  155536 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:56:34.360148262 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:56:34.370456  155536 docker.go:319] overlay module found
	I1009 18:56:34.372438  155536 out.go:179] * Using the docker driver based on user configuration
	I1009 18:56:34.373887  155536 start.go:309] selected driver: docker
	I1009 18:56:34.373899  155536 start.go:930] validating driver "docker" against <nil>
	I1009 18:56:34.373914  155536 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:56:34.374549  155536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:56:34.440614  155536 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:56:34.431079691 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:56:34.440779  155536 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 18:56:34.441016  155536 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:56:34.442930  155536 out.go:179] * Using Docker driver with root privileges
	I1009 18:56:34.444206  155536 cni.go:84] Creating CNI manager for ""
	I1009 18:56:34.444251  155536 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:56:34.444261  155536 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:56:34.444343  155536 start.go:353] cluster config:
	{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:56:34.445943  155536 out.go:179] * Starting "functional-158523" primary control-plane node in "functional-158523" cluster
	I1009 18:56:34.447061  155536 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 18:56:34.448266  155536 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:56:34.449267  155536 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:56:34.449308  155536 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:56:34.449317  155536 cache.go:58] Caching tarball of preloaded images
	I1009 18:56:34.449389  155536 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:56:34.449416  155536 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:56:34.449424  155536 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:56:34.449796  155536 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/config.json ...
	I1009 18:56:34.449828  155536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/config.json: {Name:mk63c4bc9d3515683be68725cedb0b7247c3b986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:56:34.469845  155536 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:56:34.469856  155536 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:56:34.469884  155536 cache.go:232] Successfully downloaded all kic artifacts
	I1009 18:56:34.469917  155536 start.go:361] acquireMachinesLock for functional-158523: {Name:mk995713bbd40419f859c4a8640c8ada0479020c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:56:34.470030  155536 start.go:365] duration metric: took 98.326µs to acquireMachinesLock for "functional-158523"
	I1009 18:56:34.470053  155536 start.go:94] Provisioning new machine with config: &{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:56:34.470109  155536 start.go:126] createHost starting for "" (driver="docker")
	I1009 18:56:34.472072  155536 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1009 18:56:34.472280  155536 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:41853 to docker env.
	I1009 18:56:34.472302  155536 start.go:160] libmachine.API.Create for "functional-158523" (driver="docker")
	I1009 18:56:34.472321  155536 client.go:168] LocalClient.Create starting
	I1009 18:56:34.472404  155536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem
	I1009 18:56:34.472435  155536 main.go:141] libmachine: Decoding PEM data...
	I1009 18:56:34.472446  155536 main.go:141] libmachine: Parsing certificate...
	I1009 18:56:34.472507  155536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem
	I1009 18:56:34.472526  155536 main.go:141] libmachine: Decoding PEM data...
	I1009 18:56:34.472532  155536 main.go:141] libmachine: Parsing certificate...
	I1009 18:56:34.473263  155536 cli_runner.go:164] Run: docker network inspect functional-158523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:56:34.491365  155536 cli_runner.go:211] docker network inspect functional-158523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:56:34.491486  155536 network_create.go:284] running [docker network inspect functional-158523] to gather additional debugging logs...
	I1009 18:56:34.491512  155536 cli_runner.go:164] Run: docker network inspect functional-158523
	W1009 18:56:34.509243  155536 cli_runner.go:211] docker network inspect functional-158523 returned with exit code 1
	I1009 18:56:34.509271  155536 network_create.go:287] error running [docker network inspect functional-158523]: docker network inspect functional-158523: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-158523 not found
	I1009 18:56:34.509284  155536 network_create.go:289] output of [docker network inspect functional-158523]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-158523 not found
	
	** /stderr **
	I1009 18:56:34.509453  155536 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:56:34.527465  155536 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000345210}
	I1009 18:56:34.527528  155536 network_create.go:124] attempt to create docker network functional-158523 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:56:34.527598  155536 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-158523 functional-158523
	I1009 18:56:34.587228  155536 network_create.go:108] docker network functional-158523 192.168.49.0/24 created
	I1009 18:56:34.587250  155536 kic.go:121] calculated static IP "192.168.49.2" for the "functional-158523" container
	I1009 18:56:34.587313  155536 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:56:34.605963  155536 cli_runner.go:164] Run: docker volume create functional-158523 --label name.minikube.sigs.k8s.io=functional-158523 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:56:34.625392  155536 oci.go:103] Successfully created a docker volume functional-158523
	I1009 18:56:34.625476  155536 cli_runner.go:164] Run: docker run --rm --name functional-158523-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-158523 --entrypoint /usr/bin/test -v functional-158523:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 18:56:35.014344  155536 oci.go:107] Successfully prepared a docker volume functional-158523
	I1009 18:56:35.014395  155536 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:56:35.014420  155536 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:56:35.014477  155536 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v functional-158523:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:56:39.447360  155536 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v functional-158523:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.432844757s)
	I1009 18:56:39.447401  155536 kic.go:203] duration metric: took 4.432975375s to extract preloaded images to volume ...
	W1009 18:56:39.447497  155536 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:56:39.447522  155536 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:56:39.447566  155536 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:56:39.504138  155536 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-158523 --name functional-158523 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-158523 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-158523 --network functional-158523 --ip 192.168.49.2 --volume functional-158523:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 18:56:39.774356  155536 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Running}}
	I1009 18:56:39.792983  155536 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 18:56:39.811507  155536 cli_runner.go:164] Run: docker exec functional-158523 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:56:39.859220  155536 oci.go:144] the created container "functional-158523" has a running status.
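For anyone replaying this step outside the test harness, the container and network state minikube checks above can also be confirmed by hand. A minimal sketch, assuming the container and network are still named functional-158523 as in this log:

	# confirm the kicbase container is running and received the static IP calculated earlier
	docker container inspect functional-158523 --format '{{.State.Status}}'
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' functional-158523   # expect 192.168.49.2
	# confirm the dedicated docker network and its subnet
	docker network inspect functional-158523 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'               # expect 192.168.49.0/24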
	I1009 18:56:39.859244  155536 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa...
	I1009 18:56:40.057364  155536 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:56:40.094610  155536 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 18:56:40.113890  155536 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:56:40.113905  155536 kic_runner.go:114] Args: [docker exec --privileged functional-158523 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:56:40.166818  155536 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 18:56:40.187265  155536 machine.go:93] provisionDockerMachine start ...
	I1009 18:56:40.187350  155536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 18:56:40.207312  155536 main.go:141] libmachine: Using SSH client type: native
	I1009 18:56:40.207655  155536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:56:40.207663  155536 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:56:40.355629  155536 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-158523
	
	I1009 18:56:40.355653  155536 ubuntu.go:182] provisioning hostname "functional-158523"
	I1009 18:56:40.355728  155536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 18:56:40.375192  155536 main.go:141] libmachine: Using SSH client type: native
	I1009 18:56:40.375418  155536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:56:40.375426  155536 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-158523 && echo "functional-158523" | sudo tee /etc/hostname
	I1009 18:56:40.537396  155536 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-158523
	
	I1009 18:56:40.537461  155536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 18:56:40.555315  155536 main.go:141] libmachine: Using SSH client type: native
	I1009 18:56:40.555548  155536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:56:40.555561  155536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-158523' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-158523/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-158523' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:56:40.703483  155536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:56:40.703505  155536 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 18:56:40.703528  155536 ubuntu.go:190] setting up certificates
	I1009 18:56:40.703545  155536 provision.go:84] configureAuth start
	I1009 18:56:40.703605  155536 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-158523
	I1009 18:56:40.721832  155536 provision.go:143] copyHostCerts
	I1009 18:56:40.721913  155536 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 18:56:40.721921  155536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 18:56:40.722002  155536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 18:56:40.722101  155536 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 18:56:40.722105  155536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 18:56:40.722134  155536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 18:56:40.722193  155536 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 18:56:40.722196  155536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 18:56:40.722220  155536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 18:56:40.722272  155536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.functional-158523 san=[127.0.0.1 192.168.49.2 functional-158523 localhost minikube]
	I1009 18:56:40.846103  155536 provision.go:177] copyRemoteCerts
	I1009 18:56:40.846158  155536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:56:40.846195  155536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 18:56:40.864395  155536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 18:56:40.967970  155536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:56:40.987882  155536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 18:56:41.006709  155536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 18:56:41.024558  155536 provision.go:87] duration metric: took 320.999178ms to configureAuth
	I1009 18:56:41.024583  155536 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:56:41.024766  155536 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:56:41.024858  155536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 18:56:41.042475  155536 main.go:141] libmachine: Using SSH client type: native
	I1009 18:56:41.042683  155536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:56:41.042692  155536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:56:41.298810  155536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:56:41.298826  155536 machine.go:96] duration metric: took 1.111547186s to provisionDockerMachine
	I1009 18:56:41.298835  155536 client.go:171] duration metric: took 6.826509657s to LocalClient.Create
	I1009 18:56:41.298851  155536 start.go:168] duration metric: took 6.826548611s to libmachine.API.Create "functional-158523"
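The CRIO_MINIKUBE_OPTIONS drop-in written just above can be checked from the host once provisioning finishes. A sketch, assuming docker exec access to the node container:

	# show the drop-in minikube wrote and confirm cri-o came back up after the restart
	docker exec functional-158523 cat /etc/sysconfig/crio.minikube
	docker exec functional-158523 systemctl is-active crio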
	I1009 18:56:41.298858  155536 start.go:294] postStartSetup for "functional-158523" (driver="docker")
	I1009 18:56:41.298871  155536 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:56:41.298932  155536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:56:41.298974  155536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 18:56:41.317555  155536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 18:56:41.423551  155536 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:56:41.427214  155536 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:56:41.427238  155536 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:56:41.427252  155536 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 18:56:41.427310  155536 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 18:56:41.427404  155536 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 18:56:41.427476  155536 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts -> hosts in /etc/test/nested/copy/141519
	I1009 18:56:41.427510  155536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/141519
	I1009 18:56:41.435842  155536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 18:56:41.456255  155536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts --> /etc/test/nested/copy/141519/hosts (40 bytes)
	I1009 18:56:41.473719  155536 start.go:297] duration metric: took 174.844334ms for postStartSetup
	I1009 18:56:41.474062  155536 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-158523
	I1009 18:56:41.491573  155536 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/config.json ...
	I1009 18:56:41.491866  155536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:56:41.491901  155536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 18:56:41.510370  155536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 18:56:41.610886  155536 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:56:41.615564  155536 start.go:129] duration metric: took 7.145436868s to createHost
	I1009 18:56:41.615581  155536 start.go:84] releasing machines lock for "functional-158523", held for 7.145544158s
	I1009 18:56:41.615652  155536 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-158523
	I1009 18:56:41.635094  155536 out.go:179] * Found network options:
	I1009 18:56:41.636681  155536 out.go:179]   - HTTP_PROXY=localhost:41853
	W1009 18:56:41.637989  155536 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1009 18:56:41.639067  155536 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1009 18:56:41.640438  155536 ssh_runner.go:195] Run: cat /version.json
	I1009 18:56:41.640458  155536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:56:41.640473  155536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 18:56:41.640521  155536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 18:56:41.660092  155536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 18:56:41.660598  155536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 18:56:41.813396  155536 ssh_runner.go:195] Run: systemctl --version
	I1009 18:56:41.820248  155536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:56:41.856041  155536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:56:41.861091  155536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:56:41.861142  155536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:56:41.888189  155536 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
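The bridge and podman CNI configs are renamed with a .mk_disabled suffix rather than deleted, so the originals stay recoverable. A follow-up check one could run inside the node container:

	ls -la /etc/cni/net.d/   # the disabled entries carry the .mk_disabled suffix added by the find/mv above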
	I1009 18:56:41.888204  155536 start.go:496] detecting cgroup driver to use...
	I1009 18:56:41.888233  155536 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:56:41.888271  155536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:56:41.905369  155536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:56:41.918019  155536 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:56:41.918062  155536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:56:41.935191  155536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:56:41.953008  155536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:56:42.034224  155536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:56:42.122607  155536 docker.go:234] disabling docker service ...
	I1009 18:56:42.122676  155536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:56:42.143107  155536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:56:42.156605  155536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:56:42.238240  155536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:56:42.320016  155536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:56:42.333361  155536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:56:42.348560  155536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:56:42.348611  155536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:56:42.360245  155536 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:56:42.360300  155536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:56:42.370045  155536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:56:42.379484  155536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:56:42.389887  155536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:56:42.399073  155536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:56:42.408963  155536 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:56:42.423960  155536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:56:42.433606  155536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:56:42.441918  155536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:56:42.449883  155536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:56:42.526433  155536 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:56:42.632968  155536 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:56:42.633027  155536 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:56:42.637126  155536 start.go:564] Will wait 60s for crictl version
	I1009 18:56:42.637185  155536 ssh_runner.go:195] Run: which crictl
	I1009 18:56:42.640828  155536 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:56:42.667702  155536 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:56:42.667783  155536 ssh_runner.go:195] Run: crio --version
	I1009 18:56:42.696738  155536 ssh_runner.go:195] Run: crio --version
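The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place; the resulting values can be spot-checked before kubeadm runs. A minimal sketch, run inside the node container:

	# expected: pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "systemd", conmon_cgroup = "pod"
	sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	sudo grep -A2 '^default_sysctls' /etc/crio/crio.conf.d/02-crio.conf   # should list net.ipv4.ip_unprivileged_port_start=0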
	I1009 18:56:42.727164  155536 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:56:42.728543  155536 cli_runner.go:164] Run: docker network inspect functional-158523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:56:42.745839  155536 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:56:42.750077  155536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:56:42.760412  155536 kubeadm.go:883] updating cluster {Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:56:42.760522  155536 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:56:42.760564  155536 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:56:42.792883  155536 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:56:42.792896  155536 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:56:42.792943  155536 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:56:42.820022  155536 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:56:42.820038  155536 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:56:42.820047  155536 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1009 18:56:42.820164  155536 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-158523 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
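The unit text above is what minikube writes into the kubelet drop-in a few lines further down (10-kubeadm.conf); once those files are in place, the effective unit can be reviewed with systemd's own tooling. A sketch, inside the node container:

	systemctl cat kubelet                                           # kubelet.service plus its drop-ins
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf      # the ExecStart override shown above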
	I1009 18:56:42.820236  155536 ssh_runner.go:195] Run: crio config
	I1009 18:56:42.869029  155536 cni.go:84] Creating CNI manager for ""
	I1009 18:56:42.869039  155536 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:56:42.869058  155536 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:56:42.869081  155536 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-158523 NodeName:functional-158523 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:56:42.869193  155536 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-158523"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
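This rendered config lands in /var/tmp/minikube/kubeadm.yaml.new and is copied to kubeadm.yaml further down. A quick sanity check by hand, assuming the bundled kubeadm ships the config validate subcommand (present in recent releases):

	# validates the Init/Cluster/Kubelet/KubeProxy documents against the v1beta4 schema
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml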
	
	I1009 18:56:42.869244  155536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:56:42.877472  155536 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 18:56:42.877532  155536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:56:42.885729  155536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 18:56:42.898932  155536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:56:42.914449  155536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1009 18:56:42.927225  155536 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:56:42.931056  155536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:56:42.941183  155536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:56:43.017860  155536 ssh_runner.go:195] Run: sudo systemctl start kubelet
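Once systemctl start kubelet returns, the kubelet exposes a local health endpoint; it is the same URL kubeadm polls during the kubelet-check phase later in this log. A sketch of a manual check inside the node container:

	curl -sf http://127.0.0.1:10248/healthz && echo kubelet ok
	sudo journalctl -u kubelet --no-pager -n 20   # recent kubelet entries if the health check fails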
	I1009 18:56:43.040788  155536 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523 for IP: 192.168.49.2
	I1009 18:56:43.040801  155536 certs.go:195] generating shared ca certs ...
	I1009 18:56:43.040817  155536 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:56:43.040973  155536 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 18:56:43.041015  155536 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 18:56:43.041024  155536 certs.go:257] generating profile certs ...
	I1009 18:56:43.041088  155536 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key
	I1009 18:56:43.041111  155536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt with IP's: []
	I1009 18:56:43.222226  155536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt ...
	I1009 18:56:43.222246  155536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: {Name:mk67edaa9578a8173e8f4cdbff999e77a0f83886 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:56:43.226598  155536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key ...
	I1009 18:56:43.226611  155536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key: {Name:mk226335da885af963a99a7cf87b027e778d8634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:56:43.226706  155536 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key.1809350a
	I1009 18:56:43.226717  155536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.crt.1809350a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1009 18:56:43.354044  155536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.crt.1809350a ...
	I1009 18:56:43.354067  155536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.crt.1809350a: {Name:mk8ddd801e55e68a618e2a6570d053886411ec94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:56:43.354266  155536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key.1809350a ...
	I1009 18:56:43.354275  155536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key.1809350a: {Name:mkd87d655e97e3b5ffd85017f03770a0e69863e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:56:43.354353  155536 certs.go:382] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.crt.1809350a -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.crt
	I1009 18:56:43.354489  155536 certs.go:386] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key.1809350a -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key
	I1009 18:56:43.354559  155536 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key
	I1009 18:56:43.354572  155536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.crt with IP's: []
	I1009 18:56:43.402939  155536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.crt ...
	I1009 18:56:43.402960  155536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.crt: {Name:mk8a969ea87b3ba84e24d28ab68c2a2ac18b5da6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:56:43.403170  155536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key ...
	I1009 18:56:43.403179  155536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key: {Name:mk0b6c292582314875e9a97920b6b03d1a4a1cbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:56:43.403390  155536 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 18:56:43.403430  155536 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 18:56:43.403436  155536 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:56:43.403458  155536 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 18:56:43.403478  155536 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:56:43.403498  155536 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 18:56:43.403536  155536 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 18:56:43.404117  155536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:56:43.423445  155536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:56:43.441943  155536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:56:43.460327  155536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:56:43.478655  155536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:56:43.496367  155536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 18:56:43.514658  155536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:56:43.532933  155536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 18:56:43.551007  155536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 18:56:43.570622  155536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 18:56:43.588555  155536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:56:43.606531  155536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:56:43.619369  155536 ssh_runner.go:195] Run: openssl version
	I1009 18:56:43.625914  155536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 18:56:43.634659  155536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 18:56:43.638623  155536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 18:56:43.638672  155536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 18:56:43.673815  155536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 18:56:43.683328  155536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 18:56:43.692340  155536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 18:56:43.696508  155536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 18:56:43.696557  155536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 18:56:43.732272  155536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:56:43.741593  155536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:56:43.750348  155536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:56:43.754272  155536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:56:43.754330  155536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:56:43.788723  155536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
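The hash-named symlinks created above follow the standard OpenSSL subject-hash convention: the value printed by openssl x509 -hash names the <hash>.0 link under /etc/ssl/certs. A sketch using the minikubeCA certificate from this run:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 here
	ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to the CA cert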
	I1009 18:56:43.798024  155536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:56:43.802030  155536 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:56:43.802106  155536 kubeadm.go:400] StartCluster: {Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:56:43.802179  155536 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:56:43.802227  155536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:56:43.831676  155536 cri.go:89] found id: ""
	I1009 18:56:43.831726  155536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:56:43.840315  155536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:56:43.848976  155536 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:56:43.849040  155536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:56:43.856991  155536 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:56:43.857007  155536 kubeadm.go:157] found existing configuration files:
	
	I1009 18:56:43.857048  155536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 18:56:43.864736  155536 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:56:43.864776  155536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:56:43.872088  155536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 18:56:43.880252  155536 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:56:43.880309  155536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:56:43.888669  155536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 18:56:43.897246  155536 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:56:43.897288  155536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:56:43.906264  155536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 18:56:43.914776  155536 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:56:43.914823  155536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:56:43.922524  155536 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:56:43.983812  155536 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:56:44.044204  155536 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:00:48.607288  155536 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 19:00:48.607427  155536 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 19:00:48.610748  155536 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:00:48.610844  155536 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:00:48.610973  155536 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:00:48.611019  155536 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:00:48.611047  155536 kubeadm.go:318] OS: Linux
	I1009 19:00:48.611134  155536 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:00:48.611192  155536 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:00:48.611238  155536 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:00:48.611276  155536 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:00:48.611329  155536 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:00:48.611368  155536 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:00:48.611443  155536 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:00:48.611500  155536 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:00:48.611568  155536 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:00:48.611646  155536 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:00:48.611734  155536 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:00:48.611788  155536 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:00:48.614414  155536 out.go:252]   - Generating certificates and keys ...
	I1009 19:00:48.614493  155536 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:00:48.614548  155536 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:00:48.614603  155536 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:00:48.614681  155536 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:00:48.614745  155536 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:00:48.614798  155536 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:00:48.614846  155536 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:00:48.614942  155536 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [functional-158523 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:00:48.614982  155536 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:00:48.615084  155536 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [functional-158523 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:00:48.615137  155536 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:00:48.615183  155536 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:00:48.615216  155536 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:00:48.615262  155536 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:00:48.615299  155536 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:00:48.615368  155536 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:00:48.615450  155536 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:00:48.615515  155536 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:00:48.615570  155536 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:00:48.615647  155536 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:00:48.615697  155536 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:00:48.617250  155536 out.go:252]   - Booting up control plane ...
	I1009 19:00:48.617313  155536 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:00:48.617373  155536 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:00:48.617432  155536 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:00:48.617541  155536 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:00:48.617611  155536 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:00:48.617722  155536 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:00:48.617804  155536 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:00:48.617850  155536 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:00:48.617960  155536 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:00:48.618057  155536 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:00:48.618118  155536 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.02007ms
	I1009 19:00:48.618193  155536 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:00:48.618260  155536 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1009 19:00:48.618345  155536 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:00:48.618424  155536 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:00:48.618484  155536 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001045398s
	I1009 19:00:48.618543  155536 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001124895s
	I1009 19:00:48.618608  155536 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001393531s
	I1009 19:00:48.618614  155536 kubeadm.go:318] 
	I1009 19:00:48.618694  155536 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:00:48.618766  155536 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:00:48.618836  155536 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:00:48.618934  155536 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:00:48.619017  155536 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:00:48.619089  155536 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:00:48.619162  155536 kubeadm.go:318] 
	W1009 19:00:48.619262  155536 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-158523 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-158523 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.02007ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001045398s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001124895s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001393531s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 19:00:48.619350  155536 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 19:00:49.065432  155536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:00:49.078879  155536 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:00:49.078946  155536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:00:49.087257  155536 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:00:49.087266  155536 kubeadm.go:157] found existing configuration files:
	
	I1009 19:00:49.087307  155536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 19:00:49.095118  155536 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:00:49.095183  155536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:00:49.102718  155536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 19:00:49.110575  155536 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:00:49.110617  155536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:00:49.118022  155536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 19:00:49.125966  155536 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:00:49.126017  155536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:00:49.133522  155536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 19:00:49.141180  155536 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:00:49.141226  155536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:00:49.148809  155536 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:00:49.205844  155536 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:00:49.265982  155536 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:04:52.023900  155536 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 19:04:52.024101  155536 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 19:04:52.027203  155536 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:04:52.027280  155536 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:04:52.027428  155536 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:04:52.027506  155536 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:04:52.027556  155536 kubeadm.go:318] OS: Linux
	I1009 19:04:52.027624  155536 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:04:52.027676  155536 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:04:52.027741  155536 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:04:52.027810  155536 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:04:52.027872  155536 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:04:52.027935  155536 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:04:52.028005  155536 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:04:52.028059  155536 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:04:52.028156  155536 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:04:52.028283  155536 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:04:52.028415  155536 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:04:52.028499  155536 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:04:52.032251  155536 out.go:252]   - Generating certificates and keys ...
	I1009 19:04:52.032324  155536 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:04:52.032412  155536 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:04:52.032495  155536 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:04:52.032569  155536 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:04:52.032637  155536 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:04:52.032718  155536 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:04:52.032774  155536 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:04:52.032846  155536 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:04:52.032911  155536 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:04:52.032987  155536 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:04:52.033018  155536 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:04:52.033063  155536 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:04:52.033123  155536 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:04:52.033169  155536 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:04:52.033208  155536 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:04:52.033256  155536 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:04:52.033296  155536 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:04:52.033359  155536 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:04:52.033445  155536 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:04:52.034957  155536 out.go:252]   - Booting up control plane ...
	I1009 19:04:52.035031  155536 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:04:52.035094  155536 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:04:52.035147  155536 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:04:52.035256  155536 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:04:52.035346  155536 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:04:52.035457  155536 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:04:52.035520  155536 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:04:52.035558  155536 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:04:52.035679  155536 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:04:52.035769  155536 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:04:52.035816  155536 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001784657s
	I1009 19:04:52.035896  155536 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:04:52.035977  155536 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1009 19:04:52.036050  155536 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:04:52.036113  155536 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:04:52.036176  155536 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000412328s
	I1009 19:04:52.036244  155536 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000683711s
	I1009 19:04:52.036303  155536 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000792078s
	I1009 19:04:52.036305  155536 kubeadm.go:318] 
	I1009 19:04:52.036418  155536 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:04:52.036493  155536 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:04:52.036570  155536 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:04:52.036656  155536 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:04:52.036724  155536 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:04:52.036797  155536 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:04:52.036801  155536 kubeadm.go:318] 
	I1009 19:04:52.036876  155536 kubeadm.go:402] duration metric: took 8m8.234791012s to StartCluster
	I1009 19:04:52.036926  155536 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:04:52.036980  155536 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:04:52.065309  155536 cri.go:89] found id: ""
	I1009 19:04:52.065341  155536 logs.go:282] 0 containers: []
	W1009 19:04:52.065352  155536 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:04:52.065359  155536 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:04:52.065439  155536 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:04:52.091799  155536 cri.go:89] found id: ""
	I1009 19:04:52.091815  155536 logs.go:282] 0 containers: []
	W1009 19:04:52.091822  155536 logs.go:284] No container was found matching "etcd"
	I1009 19:04:52.091829  155536 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:04:52.091897  155536 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:04:52.119809  155536 cri.go:89] found id: ""
	I1009 19:04:52.119827  155536 logs.go:282] 0 containers: []
	W1009 19:04:52.119836  155536 logs.go:284] No container was found matching "coredns"
	I1009 19:04:52.119843  155536 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:04:52.119892  155536 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:04:52.147234  155536 cri.go:89] found id: ""
	I1009 19:04:52.147249  155536 logs.go:282] 0 containers: []
	W1009 19:04:52.147255  155536 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:04:52.147261  155536 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:04:52.147316  155536 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:04:52.174332  155536 cri.go:89] found id: ""
	I1009 19:04:52.174347  155536 logs.go:282] 0 containers: []
	W1009 19:04:52.174353  155536 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:04:52.174358  155536 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:04:52.174434  155536 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:04:52.201274  155536 cri.go:89] found id: ""
	I1009 19:04:52.201295  155536 logs.go:282] 0 containers: []
	W1009 19:04:52.201305  155536 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:04:52.201314  155536 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:04:52.201372  155536 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:04:52.229351  155536 cri.go:89] found id: ""
	I1009 19:04:52.229368  155536 logs.go:282] 0 containers: []
	W1009 19:04:52.229375  155536 logs.go:284] No container was found matching "kindnet"
	I1009 19:04:52.229404  155536 logs.go:123] Gathering logs for kubelet ...
	I1009 19:04:52.229419  155536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:04:52.294610  155536 logs.go:123] Gathering logs for dmesg ...
	I1009 19:04:52.294633  155536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:04:52.307195  155536 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:04:52.307216  155536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:04:52.370473  155536 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:04:52.362119    2433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:04:52.362781    2433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:04:52.364452    2433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:04:52.364937    2433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:04:52.366530    2433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:04:52.362119    2433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:04:52.362781    2433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:04:52.364452    2433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:04:52.364937    2433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:04:52.366530    2433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:04:52.370535  155536 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:04:52.370552  155536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:04:52.432482  155536 logs.go:123] Gathering logs for container status ...
	I1009 19:04:52.432508  155536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 19:04:52.463045  155536 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001784657s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000412328s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000683711s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000792078s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 19:04:52.463102  155536 out.go:285] * 
	W1009 19:04:52.463323  155536 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001784657s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000412328s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000683711s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000792078s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:04:52.463354  155536 out.go:285] * 
	W1009 19:04:52.465434  155536 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:04:52.469314  155536 out.go:203] 
	W1009 19:04:52.470527  155536 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001784657s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000412328s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000683711s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000792078s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:04:52.470549  155536 out.go:285] * 
	I1009 19:04:52.472948  155536 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:04:49 functional-158523 crio[788]: time="2025-10-09T19:04:49.626117159Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:04:49 functional-158523 crio[788]: time="2025-10-09T19:04:49.626546584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:04:49 functional-158523 crio[788]: time="2025-10-09T19:04:49.628153072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:04:49 functional-158523 crio[788]: time="2025-10-09T19:04:49.628730415Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:04:49 functional-158523 crio[788]: time="2025-10-09T19:04:49.646566155Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f6516840-e392-4f78-9dc7-dd54aa557608 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:04:49 functional-158523 crio[788]: time="2025-10-09T19:04:49.647708765Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=57afba84-bf7b-46b6-878b-dc815abf51cb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:04:49 functional-158523 crio[788]: time="2025-10-09T19:04:49.648056384Z" level=info msg="createCtr: deleting container ID d31400e8ae6101e7d76bf6c5d482239527cd7d849d627ffa64b862c873b0e314 from idIndex" id=f6516840-e392-4f78-9dc7-dd54aa557608 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:04:49 functional-158523 crio[788]: time="2025-10-09T19:04:49.648087402Z" level=info msg="createCtr: removing container d31400e8ae6101e7d76bf6c5d482239527cd7d849d627ffa64b862c873b0e314" id=f6516840-e392-4f78-9dc7-dd54aa557608 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:04:49 functional-158523 crio[788]: time="2025-10-09T19:04:49.648120095Z" level=info msg="createCtr: deleting container d31400e8ae6101e7d76bf6c5d482239527cd7d849d627ffa64b862c873b0e314 from storage" id=f6516840-e392-4f78-9dc7-dd54aa557608 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:04:49 functional-158523 crio[788]: time="2025-10-09T19:04:49.649052787Z" level=info msg="createCtr: deleting container ID d88ded24c16bb243586418f054a1bc25872ce3dbe9240a8a97bedcbd010e2cab from idIndex" id=57afba84-bf7b-46b6-878b-dc815abf51cb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:04:49 functional-158523 crio[788]: time="2025-10-09T19:04:49.649088252Z" level=info msg="createCtr: removing container d88ded24c16bb243586418f054a1bc25872ce3dbe9240a8a97bedcbd010e2cab" id=57afba84-bf7b-46b6-878b-dc815abf51cb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:04:49 functional-158523 crio[788]: time="2025-10-09T19:04:49.649131772Z" level=info msg="createCtr: deleting container d88ded24c16bb243586418f054a1bc25872ce3dbe9240a8a97bedcbd010e2cab from storage" id=57afba84-bf7b-46b6-878b-dc815abf51cb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:04:49 functional-158523 crio[788]: time="2025-10-09T19:04:49.652049707Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-158523_kube-system_589c70f36d169281ef056387fc3a74a2_0" id=f6516840-e392-4f78-9dc7-dd54aa557608 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:04:49 functional-158523 crio[788]: time="2025-10-09T19:04:49.652427814Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-158523_kube-system_bbd906eec6f9b7c1a1a340fc9a9fdcd1_0" id=57afba84-bf7b-46b6-878b-dc815abf51cb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:04:51 functional-158523 crio[788]: time="2025-10-09T19:04:51.618254411Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=4d4c215a-0fcd-4081-8a5d-7ed5260cacb2 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:04:51 functional-158523 crio[788]: time="2025-10-09T19:04:51.618995501Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=64bb40fb-2726-4ffe-9765-eba9425da482 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:04:51 functional-158523 crio[788]: time="2025-10-09T19:04:51.61983211Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-158523/kube-controller-manager" id=aeff0653-a1b1-484a-8e90-357e25e0b433 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:04:51 functional-158523 crio[788]: time="2025-10-09T19:04:51.620043118Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:04:51 functional-158523 crio[788]: time="2025-10-09T19:04:51.624059754Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:04:51 functional-158523 crio[788]: time="2025-10-09T19:04:51.624570303Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:04:51 functional-158523 crio[788]: time="2025-10-09T19:04:51.63849783Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=aeff0653-a1b1-484a-8e90-357e25e0b433 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:04:51 functional-158523 crio[788]: time="2025-10-09T19:04:51.639868845Z" level=info msg="createCtr: deleting container ID 8119400ebbbc3974014ea65273971ec2cd4a1a95e3f69f2ddd1c334d1ef6e792 from idIndex" id=aeff0653-a1b1-484a-8e90-357e25e0b433 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:04:51 functional-158523 crio[788]: time="2025-10-09T19:04:51.639909778Z" level=info msg="createCtr: removing container 8119400ebbbc3974014ea65273971ec2cd4a1a95e3f69f2ddd1c334d1ef6e792" id=aeff0653-a1b1-484a-8e90-357e25e0b433 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:04:51 functional-158523 crio[788]: time="2025-10-09T19:04:51.639943146Z" level=info msg="createCtr: deleting container 8119400ebbbc3974014ea65273971ec2cd4a1a95e3f69f2ddd1c334d1ef6e792 from storage" id=aeff0653-a1b1-484a-8e90-357e25e0b433 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:04:51 functional-158523 crio[788]: time="2025-10-09T19:04:51.64227003Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-158523_kube-system_08a2248bbc397b9ed927890e2073120b_0" id=aeff0653-a1b1-484a-8e90-357e25e0b433 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:04:53.401246    2580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:04:53.401758    2580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:04:53.403326    2580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:04:53.403819    2580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:04:53.405054    2580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:04:53 up 47 min,  0 user,  load average: 0.08, 0.45, 13.88
	Linux functional-158523 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:04:49 functional-158523 kubelet[1810]: E1009 19:04:49.618459    1810 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:04:49 functional-158523 kubelet[1810]: E1009 19:04:49.618594    1810 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:04:49 functional-158523 kubelet[1810]: E1009 19:04:49.652423    1810 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:04:49 functional-158523 kubelet[1810]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:04:49 functional-158523 kubelet[1810]:  > podSandboxID="8e9b8d6f8f5607eade31cf47137dabb7c979b7a05be5d892419ed28c4be5e916"
	Oct 09 19:04:49 functional-158523 kubelet[1810]: E1009 19:04:49.652535    1810 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:04:49 functional-158523 kubelet[1810]:         container kube-scheduler start failed in pod kube-scheduler-functional-158523_kube-system(589c70f36d169281ef056387fc3a74a2): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:04:49 functional-158523 kubelet[1810]:  > logger="UnhandledError"
	Oct 09 19:04:49 functional-158523 kubelet[1810]: E1009 19:04:49.652577    1810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-158523" podUID="589c70f36d169281ef056387fc3a74a2"
	Oct 09 19:04:49 functional-158523 kubelet[1810]: E1009 19:04:49.652703    1810 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:04:49 functional-158523 kubelet[1810]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:04:49 functional-158523 kubelet[1810]:  > podSandboxID="e6a4bc1b2df9d751888af8288e7c4c569afb0335567fe2f74c173dbe4e47f513"
	Oct 09 19:04:49 functional-158523 kubelet[1810]: E1009 19:04:49.652789    1810 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:04:49 functional-158523 kubelet[1810]:         container kube-apiserver start failed in pod kube-apiserver-functional-158523_kube-system(bbd906eec6f9b7c1a1a340fc9a9fdcd1): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:04:49 functional-158523 kubelet[1810]:  > logger="UnhandledError"
	Oct 09 19:04:49 functional-158523 kubelet[1810]: E1009 19:04:49.653847    1810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-158523" podUID="bbd906eec6f9b7c1a1a340fc9a9fdcd1"
	Oct 09 19:04:51 functional-158523 kubelet[1810]: E1009 19:04:51.617893    1810 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:04:51 functional-158523 kubelet[1810]: E1009 19:04:51.629998    1810 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-158523\" not found"
	Oct 09 19:04:51 functional-158523 kubelet[1810]: E1009 19:04:51.642620    1810 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:04:51 functional-158523 kubelet[1810]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:04:51 functional-158523 kubelet[1810]:  > podSandboxID="1577439806fcd9d603693a21a1b77ea4da9104d29c8aecd0dc0681165a9e1de2"
	Oct 09 19:04:51 functional-158523 kubelet[1810]: E1009 19:04:51.642749    1810 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:04:51 functional-158523 kubelet[1810]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-158523_kube-system(08a2248bbc397b9ed927890e2073120b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:04:51 functional-158523 kubelet[1810]:  > logger="UnhandledError"
	Oct 09 19:04:51 functional-158523 kubelet[1810]: E1009 19:04:51.642783    1810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-158523" podUID="08a2248bbc397b9ed927890e2073120b"
	

                                                
                                                
-- /stdout --
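Note: the repeated CreateContainerError "container create failed: cannot open sd-bus: No such file or directory" above hits every static control-plane pod (kube-apiserver, kube-controller-manager, kube-scheduler). This run uses the systemd cgroup driver (the log below shows CRI-O being reconfigured with cgroup_manager = "systemd"), so one plausible line of investigation is whether the systemd D-Bus socket is reachable inside the node container. A minimal diagnostic sketch, assuming the kicbase image exposes the standard Debian socket path and dbus service name (not verified by this report):

	docker exec functional-158523 ls -l /run/dbus/system_bus_socket   # is the system D-Bus socket present?
	docker exec functional-158523 systemctl is-active dbus            # is dbus running under systemd in the node?
	docker exec functional-158523 crictl ps -a                        # which containers failed to be created?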
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523: exit status 6 (303.454814ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:04:53.796418  160892 status.go:458] kubeconfig endpoint: get endpoint: "functional-158523" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-158523" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (499.59s)
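Note: the status output above reports the profile as Stopped, warns that kubectl points at a stale context, and the stderr shows the "functional-158523" entry missing from the kubeconfig at /home/jenkins/minikube-integration/21683-137890/kubeconfig. The warning itself names the fix; a hedged sketch of applying and verifying it follows (the -p profile flag is the standard minikube global flag, and update-context only repairs the kubeconfig entry, it does not restart the stopped apiserver):

	out/minikube-linux-amd64 update-context -p functional-158523
	kubectl --kubeconfig /home/jenkins/minikube-integration/21683-137890/kubeconfig config get-contexts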

                                                
                                    
x
+
TestFunctional/serial/SoftStart (366.78s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1009 19:04:53.814062  141519 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-158523 --alsologtostderr -v=8
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-158523 --alsologtostderr -v=8: exit status 80 (6m4.131620415s)

                                                
                                                
-- stdout --
	* [functional-158523] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-158523" primary control-plane node in "functional-158523" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:04:53.859600  161014 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:04:53.859894  161014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:53.859904  161014 out.go:374] Setting ErrFile to fd 2...
	I1009 19:04:53.859909  161014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:53.860103  161014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:04:53.860622  161014 out.go:368] Setting JSON to false
	I1009 19:04:53.861569  161014 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2843,"bootTime":1760033851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:04:53.861680  161014 start.go:143] virtualization: kvm guest
	I1009 19:04:53.864538  161014 out.go:179] * [functional-158523] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:04:53.866020  161014 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:04:53.866041  161014 notify.go:221] Checking for updates...
	I1009 19:04:53.868520  161014 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:04:53.869799  161014 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:53.871001  161014 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:04:53.872350  161014 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:04:53.873695  161014 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:04:53.875515  161014 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:53.875628  161014 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:04:53.899122  161014 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:04:53.899239  161014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:04:53.961702  161014 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:04:53.950772825 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:04:53.961810  161014 docker.go:319] overlay module found
	I1009 19:04:53.963901  161014 out.go:179] * Using the docker driver based on existing profile
	I1009 19:04:53.965359  161014 start.go:309] selected driver: docker
	I1009 19:04:53.965397  161014 start.go:930] validating driver "docker" against &{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:04:53.965505  161014 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:04:53.965601  161014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:04:54.024534  161014 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:04:54.014787007 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:04:54.025138  161014 cni.go:84] Creating CNI manager for ""
	I1009 19:04:54.025189  161014 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:04:54.025246  161014 start.go:353] cluster config:
	{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:04:54.027519  161014 out.go:179] * Starting "functional-158523" primary control-plane node in "functional-158523" cluster
	I1009 19:04:54.028967  161014 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:04:54.030473  161014 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:04:54.031821  161014 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:04:54.031876  161014 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:04:54.031885  161014 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:04:54.031986  161014 cache.go:58] Caching tarball of preloaded images
	I1009 19:04:54.032085  161014 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:04:54.032098  161014 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:04:54.032213  161014 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/config.json ...
	I1009 19:04:54.053026  161014 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:04:54.053045  161014 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:04:54.053063  161014 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:04:54.053096  161014 start.go:361] acquireMachinesLock for functional-158523: {Name:mk995713bbd40419f859c4a8640c8ada0479020c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:04:54.053186  161014 start.go:365] duration metric: took 46.429µs to acquireMachinesLock for "functional-158523"
	I1009 19:04:54.053209  161014 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:04:54.053220  161014 fix.go:55] fixHost starting: 
	I1009 19:04:54.053511  161014 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:04:54.070674  161014 fix.go:113] recreateIfNeeded on functional-158523: state=Running err=<nil>
	W1009 19:04:54.070714  161014 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:04:54.072611  161014 out.go:252] * Updating the running docker "functional-158523" container ...
	I1009 19:04:54.072644  161014 machine.go:93] provisionDockerMachine start ...
	I1009 19:04:54.072732  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:54.089158  161014 main.go:141] libmachine: Using SSH client type: native
	I1009 19:04:54.089398  161014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:04:54.089417  161014 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:04:54.234516  161014 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-158523
	
	I1009 19:04:54.234543  161014 ubuntu.go:182] provisioning hostname "functional-158523"
	I1009 19:04:54.234606  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:54.252690  161014 main.go:141] libmachine: Using SSH client type: native
	I1009 19:04:54.252942  161014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:04:54.252960  161014 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-158523 && echo "functional-158523" | sudo tee /etc/hostname
	I1009 19:04:54.409130  161014 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-158523
	
	I1009 19:04:54.409240  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:54.428592  161014 main.go:141] libmachine: Using SSH client type: native
	I1009 19:04:54.428819  161014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:04:54.428839  161014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-158523' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-158523/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-158523' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:04:54.575221  161014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:04:54.575248  161014 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:04:54.575298  161014 ubuntu.go:190] setting up certificates
	I1009 19:04:54.575313  161014 provision.go:84] configureAuth start
	I1009 19:04:54.575366  161014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-158523
	I1009 19:04:54.593157  161014 provision.go:143] copyHostCerts
	I1009 19:04:54.593200  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:04:54.593229  161014 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:04:54.593244  161014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:04:54.593315  161014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:04:54.593491  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:04:54.593517  161014 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:04:54.593524  161014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:04:54.593557  161014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:04:54.593615  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:04:54.593632  161014 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:04:54.593638  161014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:04:54.593693  161014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:04:54.593752  161014 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.functional-158523 san=[127.0.0.1 192.168.49.2 functional-158523 localhost minikube]
	I1009 19:04:54.998231  161014 provision.go:177] copyRemoteCerts
	I1009 19:04:54.998297  161014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:04:54.998335  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.016505  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.120020  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:04:55.120077  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 19:04:55.138116  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:04:55.138187  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:04:55.157031  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:04:55.157100  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:04:55.176045  161014 provision.go:87] duration metric: took 600.715143ms to configureAuth
	I1009 19:04:55.176080  161014 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:04:55.176245  161014 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:55.176357  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.194450  161014 main.go:141] libmachine: Using SSH client type: native
	I1009 19:04:55.194679  161014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:04:55.194701  161014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:04:55.467764  161014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:04:55.467789  161014 machine.go:96] duration metric: took 1.395134259s to provisionDockerMachine
	I1009 19:04:55.467804  161014 start.go:294] postStartSetup for "functional-158523" (driver="docker")
	I1009 19:04:55.467821  161014 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:04:55.467882  161014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:04:55.467922  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.486353  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.591117  161014 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:04:55.594855  161014 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1009 19:04:55.594886  161014 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1009 19:04:55.594893  161014 command_runner.go:130] > VERSION_ID="12"
	I1009 19:04:55.594900  161014 command_runner.go:130] > VERSION="12 (bookworm)"
	I1009 19:04:55.594907  161014 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1009 19:04:55.594911  161014 command_runner.go:130] > ID=debian
	I1009 19:04:55.594915  161014 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1009 19:04:55.594920  161014 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1009 19:04:55.594926  161014 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1009 19:04:55.594992  161014 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:04:55.595011  161014 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:04:55.595023  161014 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:04:55.595090  161014 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:04:55.595204  161014 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:04:55.595227  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:04:55.595320  161014 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts -> hosts in /etc/test/nested/copy/141519
	I1009 19:04:55.595330  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts -> /etc/test/nested/copy/141519/hosts
	I1009 19:04:55.595388  161014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/141519
	I1009 19:04:55.603244  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:04:55.621701  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts --> /etc/test/nested/copy/141519/hosts (40 bytes)
	I1009 19:04:55.640532  161014 start.go:297] duration metric: took 172.708538ms for postStartSetup
	I1009 19:04:55.640625  161014 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:04:55.640672  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.658424  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.758913  161014 command_runner.go:130] > 38%
	I1009 19:04:55.759004  161014 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:04:55.763762  161014 command_runner.go:130] > 182G
	I1009 19:04:55.763807  161014 fix.go:57] duration metric: took 1.710584464s for fixHost
	I1009 19:04:55.763821  161014 start.go:84] releasing machines lock for "functional-158523", held for 1.710622732s
	I1009 19:04:55.763882  161014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-158523
	I1009 19:04:55.781557  161014 ssh_runner.go:195] Run: cat /version.json
	I1009 19:04:55.781620  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.781568  161014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:04:55.781740  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.800026  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.800289  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.899840  161014 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759745255-21703", "minikube_version": "v1.37.0", "commit": "a51fe4b7ffc88febd8814e8831f38772e976d097"}
	I1009 19:04:55.953125  161014 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1009 19:04:55.955421  161014 ssh_runner.go:195] Run: systemctl --version
	I1009 19:04:55.962169  161014 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1009 19:04:55.962207  161014 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1009 19:04:55.962422  161014 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:04:56.001789  161014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 19:04:56.006364  161014 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1009 19:04:56.006710  161014 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:04:56.006818  161014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:04:56.015207  161014 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:04:56.015234  161014 start.go:496] detecting cgroup driver to use...
	I1009 19:04:56.015270  161014 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:04:56.015326  161014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:04:56.030444  161014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:04:56.043355  161014 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:04:56.043439  161014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:04:56.058903  161014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:04:56.072794  161014 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:04:56.155598  161014 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:04:56.243484  161014 docker.go:234] disabling docker service ...
	I1009 19:04:56.243560  161014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:04:56.258472  161014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:04:56.271168  161014 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:04:56.357916  161014 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:04:56.444044  161014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:04:56.457436  161014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:04:56.471973  161014 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1009 19:04:56.472020  161014 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:04:56.472074  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.481231  161014 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:04:56.481304  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.490735  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.499743  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.508857  161014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:04:56.517176  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.525878  161014 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.534146  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.542852  161014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:04:56.549944  161014 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1009 19:04:56.550015  161014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:04:56.557444  161014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:04:56.640120  161014 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:04:56.755858  161014 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:04:56.755937  161014 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:04:56.760115  161014 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1009 19:04:56.760139  161014 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1009 19:04:56.760145  161014 command_runner.go:130] > Device: 0,59	Inode: 3908        Links: 1
	I1009 19:04:56.760152  161014 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 19:04:56.760157  161014 command_runner.go:130] > Access: 2025-10-09 19:04:56.737667059 +0000
	I1009 19:04:56.760162  161014 command_runner.go:130] > Modify: 2025-10-09 19:04:56.737667059 +0000
	I1009 19:04:56.760167  161014 command_runner.go:130] > Change: 2025-10-09 19:04:56.737667059 +0000
	I1009 19:04:56.760171  161014 command_runner.go:130] >  Birth: 2025-10-09 19:04:56.737667059 +0000
	I1009 19:04:56.760191  161014 start.go:564] Will wait 60s for crictl version
	I1009 19:04:56.760238  161014 ssh_runner.go:195] Run: which crictl
	I1009 19:04:56.764068  161014 command_runner.go:130] > /usr/local/bin/crictl
	I1009 19:04:56.764145  161014 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:04:56.790045  161014 command_runner.go:130] > Version:  0.1.0
	I1009 19:04:56.790068  161014 command_runner.go:130] > RuntimeName:  cri-o
	I1009 19:04:56.790072  161014 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1009 19:04:56.790077  161014 command_runner.go:130] > RuntimeApiVersion:  v1
	I1009 19:04:56.790095  161014 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:04:56.790164  161014 ssh_runner.go:195] Run: crio --version
	I1009 19:04:56.817435  161014 command_runner.go:130] > crio version 1.34.1
	I1009 19:04:56.817460  161014 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1009 19:04:56.817466  161014 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1009 19:04:56.817470  161014 command_runner.go:130] >    GitTreeState:   dirty
	I1009 19:04:56.817475  161014 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1009 19:04:56.817480  161014 command_runner.go:130] >    GoVersion:      go1.24.6
	I1009 19:04:56.817483  161014 command_runner.go:130] >    Compiler:       gc
	I1009 19:04:56.817488  161014 command_runner.go:130] >    Platform:       linux/amd64
	I1009 19:04:56.817492  161014 command_runner.go:130] >    Linkmode:       static
	I1009 19:04:56.817496  161014 command_runner.go:130] >    BuildTags:
	I1009 19:04:56.817499  161014 command_runner.go:130] >      static
	I1009 19:04:56.817503  161014 command_runner.go:130] >      netgo
	I1009 19:04:56.817506  161014 command_runner.go:130] >      osusergo
	I1009 19:04:56.817510  161014 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1009 19:04:56.817514  161014 command_runner.go:130] >      seccomp
	I1009 19:04:56.817518  161014 command_runner.go:130] >      apparmor
	I1009 19:04:56.817521  161014 command_runner.go:130] >      selinux
	I1009 19:04:56.817525  161014 command_runner.go:130] >    LDFlags:          unknown
	I1009 19:04:56.817531  161014 command_runner.go:130] >    SeccompEnabled:   true
	I1009 19:04:56.817535  161014 command_runner.go:130] >    AppArmorEnabled:  false
	I1009 19:04:56.819047  161014 ssh_runner.go:195] Run: crio --version
	I1009 19:04:56.846110  161014 command_runner.go:130] > crio version 1.34.1
	I1009 19:04:56.846137  161014 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1009 19:04:56.846145  161014 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1009 19:04:56.846154  161014 command_runner.go:130] >    GitTreeState:   dirty
	I1009 19:04:56.846160  161014 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1009 19:04:56.846166  161014 command_runner.go:130] >    GoVersion:      go1.24.6
	I1009 19:04:56.846172  161014 command_runner.go:130] >    Compiler:       gc
	I1009 19:04:56.846179  161014 command_runner.go:130] >    Platform:       linux/amd64
	I1009 19:04:56.846185  161014 command_runner.go:130] >    Linkmode:       static
	I1009 19:04:56.846193  161014 command_runner.go:130] >    BuildTags:
	I1009 19:04:56.846202  161014 command_runner.go:130] >      static
	I1009 19:04:56.846209  161014 command_runner.go:130] >      netgo
	I1009 19:04:56.846218  161014 command_runner.go:130] >      osusergo
	I1009 19:04:56.846226  161014 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1009 19:04:56.846238  161014 command_runner.go:130] >      seccomp
	I1009 19:04:56.846246  161014 command_runner.go:130] >      apparmor
	I1009 19:04:56.846252  161014 command_runner.go:130] >      selinux
	I1009 19:04:56.846262  161014 command_runner.go:130] >    LDFlags:          unknown
	I1009 19:04:56.846270  161014 command_runner.go:130] >    SeccompEnabled:   true
	I1009 19:04:56.846280  161014 command_runner.go:130] >    AppArmorEnabled:  false
	I1009 19:04:56.849910  161014 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:04:56.851471  161014 cli_runner.go:164] Run: docker network inspect functional-158523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:04:56.867982  161014 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:04:56.872517  161014 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1009 19:04:56.872627  161014 kubeadm.go:883] updating cluster {Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:04:56.872731  161014 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:04:56.872790  161014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:04:56.904568  161014 command_runner.go:130] > {
	I1009 19:04:56.904591  161014 command_runner.go:130] >   "images":  [
	I1009 19:04:56.904595  161014 command_runner.go:130] >     {
	I1009 19:04:56.904603  161014 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1009 19:04:56.904608  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.904617  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1009 19:04:56.904622  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904628  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.904652  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1009 19:04:56.904667  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1009 19:04:56.904673  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904681  161014 command_runner.go:130] >       "size":  "109379124",
	I1009 19:04:56.904688  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.904694  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.904700  161014 command_runner.go:130] >     },
	I1009 19:04:56.904706  161014 command_runner.go:130] >     {
	I1009 19:04:56.904719  161014 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 19:04:56.904728  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.904736  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 19:04:56.904744  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904754  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.904771  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 19:04:56.904786  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 19:04:56.904794  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904799  161014 command_runner.go:130] >       "size":  "31470524",
	I1009 19:04:56.904805  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.904814  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.904822  161014 command_runner.go:130] >     },
	I1009 19:04:56.904831  161014 command_runner.go:130] >     {
	I1009 19:04:56.904841  161014 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1009 19:04:56.904851  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.904861  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1009 19:04:56.904870  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904879  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.904890  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1009 19:04:56.904903  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1009 19:04:56.904912  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904919  161014 command_runner.go:130] >       "size":  "76103547",
	I1009 19:04:56.904928  161014 command_runner.go:130] >       "username":  "nonroot",
	I1009 19:04:56.904938  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.904946  161014 command_runner.go:130] >     },
	I1009 19:04:56.904951  161014 command_runner.go:130] >     {
	I1009 19:04:56.904963  161014 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1009 19:04:56.904972  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.904982  161014 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1009 19:04:56.904988  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904994  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905015  161014 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1009 19:04:56.905029  161014 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1009 19:04:56.905038  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905048  161014 command_runner.go:130] >       "size":  "195976448",
	I1009 19:04:56.905056  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905062  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.905071  161014 command_runner.go:130] >       },
	I1009 19:04:56.905082  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905092  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905096  161014 command_runner.go:130] >     },
	I1009 19:04:56.905099  161014 command_runner.go:130] >     {
	I1009 19:04:56.905111  161014 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1009 19:04:56.905120  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905128  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1009 19:04:56.905137  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905147  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905160  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1009 19:04:56.905174  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1009 19:04:56.905182  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905188  161014 command_runner.go:130] >       "size":  "89046001",
	I1009 19:04:56.905195  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905199  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.905207  161014 command_runner.go:130] >       },
	I1009 19:04:56.905218  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905228  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905235  161014 command_runner.go:130] >     },
	I1009 19:04:56.905240  161014 command_runner.go:130] >     {
	I1009 19:04:56.905253  161014 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1009 19:04:56.905262  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905273  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1009 19:04:56.905280  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905284  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905299  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1009 19:04:56.905315  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1009 19:04:56.905324  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905333  161014 command_runner.go:130] >       "size":  "76004181",
	I1009 19:04:56.905342  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905352  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.905360  161014 command_runner.go:130] >       },
	I1009 19:04:56.905367  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905393  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905402  161014 command_runner.go:130] >     },
	I1009 19:04:56.905407  161014 command_runner.go:130] >     {
	I1009 19:04:56.905417  161014 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1009 19:04:56.905427  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905438  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1009 19:04:56.905446  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905456  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905470  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1009 19:04:56.905482  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1009 19:04:56.905490  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905500  161014 command_runner.go:130] >       "size":  "73138073",
	I1009 19:04:56.905510  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905516  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905525  161014 command_runner.go:130] >     },
	I1009 19:04:56.905533  161014 command_runner.go:130] >     {
	I1009 19:04:56.905543  161014 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1009 19:04:56.905552  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905563  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1009 19:04:56.905571  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905579  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905590  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1009 19:04:56.905613  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1009 19:04:56.905622  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905629  161014 command_runner.go:130] >       "size":  "53844823",
	I1009 19:04:56.905637  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905647  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.905655  161014 command_runner.go:130] >       },
	I1009 19:04:56.905664  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905673  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905681  161014 command_runner.go:130] >     },
	I1009 19:04:56.905690  161014 command_runner.go:130] >     {
	I1009 19:04:56.905696  161014 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1009 19:04:56.905705  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905712  161014 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1009 19:04:56.905721  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905727  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905740  161014 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1009 19:04:56.905754  161014 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1009 19:04:56.905762  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905772  161014 command_runner.go:130] >       "size":  "742092",
	I1009 19:04:56.905783  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905791  161014 command_runner.go:130] >         "value":  "65535"
	I1009 19:04:56.905795  161014 command_runner.go:130] >       },
	I1009 19:04:56.905802  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905808  161014 command_runner.go:130] >       "pinned":  true
	I1009 19:04:56.905816  161014 command_runner.go:130] >     }
	I1009 19:04:56.905822  161014 command_runner.go:130] >   ]
	I1009 19:04:56.905830  161014 command_runner.go:130] > }
	I1009 19:04:56.906014  161014 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:04:56.906027  161014 crio.go:433] Images already preloaded, skipping extraction
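The crictl dump above is what the preload check consumes: minikube shells out to "sudo crictl images --output json" and compares the returned repo tags against the images expected for the requested Kubernetes version before deciding to skip extraction. The following Go sketch only illustrates that decision; the struct names, helper name, and expected-image list are assumptions of this sketch, not minikube's actual code.

// Sketch: decide whether the runtime already has every expected image,
// based on `crictl images --output json` output like the dump logged above.
// Types, helper name and the expected list are illustrative assumptions.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImage struct {
	RepoTags []string `json:"repoTags"`
}

type crictlImageList struct {
	Images []crictlImage `json:"images"`
}

func imagesPreloaded(expected []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list crictlImageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range expected {
		if !have[want] {
			return false, fmt.Errorf("missing image %s", want)
		}
	}
	return true, nil
}

func main() {
	// Two of the tags visible in the dump above, used here only as an example.
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.34.1",
		"registry.k8s.io/pause:3.10.1",
	}
	ok, err := imagesPreloaded(expected)
	fmt.Println(ok, err)
}

When every expected tag is present, the caller can log the "all images are preloaded" / "skipping extraction" messages seen in the surrounding lines and move on without pulling anything.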
	I1009 19:04:56.906079  161014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:04:56.933720  161014 command_runner.go:130] > {
	I1009 19:04:56.933747  161014 command_runner.go:130] >   "images":  [
	I1009 19:04:56.933753  161014 command_runner.go:130] >     {
	I1009 19:04:56.933769  161014 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1009 19:04:56.933774  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.933781  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1009 19:04:56.933788  161014 command_runner.go:130] >       ],
	I1009 19:04:56.933794  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.933805  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1009 19:04:56.933821  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1009 19:04:56.933827  161014 command_runner.go:130] >       ],
	I1009 19:04:56.933835  161014 command_runner.go:130] >       "size":  "109379124",
	I1009 19:04:56.933845  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.933855  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.933861  161014 command_runner.go:130] >     },
	I1009 19:04:56.933864  161014 command_runner.go:130] >     {
	I1009 19:04:56.933873  161014 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 19:04:56.933879  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.933890  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 19:04:56.933899  161014 command_runner.go:130] >       ],
	I1009 19:04:56.933906  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.933921  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 19:04:56.933935  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 19:04:56.933944  161014 command_runner.go:130] >       ],
	I1009 19:04:56.933951  161014 command_runner.go:130] >       "size":  "31470524",
	I1009 19:04:56.933960  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.933970  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.933975  161014 command_runner.go:130] >     },
	I1009 19:04:56.933979  161014 command_runner.go:130] >     {
	I1009 19:04:56.933992  161014 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1009 19:04:56.934002  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934016  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1009 19:04:56.934029  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934036  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934050  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1009 19:04:56.934065  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1009 19:04:56.934072  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934079  161014 command_runner.go:130] >       "size":  "76103547",
	I1009 19:04:56.934086  161014 command_runner.go:130] >       "username":  "nonroot",
	I1009 19:04:56.934090  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934097  161014 command_runner.go:130] >     },
	I1009 19:04:56.934102  161014 command_runner.go:130] >     {
	I1009 19:04:56.934116  161014 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1009 19:04:56.934126  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934137  161014 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1009 19:04:56.934145  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934151  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934164  161014 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1009 19:04:56.934177  161014 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1009 19:04:56.934183  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934188  161014 command_runner.go:130] >       "size":  "195976448",
	I1009 19:04:56.934197  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934207  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.934216  161014 command_runner.go:130] >       },
	I1009 19:04:56.934263  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934275  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934279  161014 command_runner.go:130] >     },
	I1009 19:04:56.934283  161014 command_runner.go:130] >     {
	I1009 19:04:56.934296  161014 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1009 19:04:56.934306  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934315  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1009 19:04:56.934323  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934329  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934344  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1009 19:04:56.934358  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1009 19:04:56.934372  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934397  161014 command_runner.go:130] >       "size":  "89046001",
	I1009 19:04:56.934408  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934416  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.934425  161014 command_runner.go:130] >       },
	I1009 19:04:56.934435  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934444  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934452  161014 command_runner.go:130] >     },
	I1009 19:04:56.934461  161014 command_runner.go:130] >     {
	I1009 19:04:56.934473  161014 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1009 19:04:56.934480  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934486  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1009 19:04:56.934493  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934499  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934514  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1009 19:04:56.934529  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1009 19:04:56.934538  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934545  161014 command_runner.go:130] >       "size":  "76004181",
	I1009 19:04:56.934554  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934560  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.934566  161014 command_runner.go:130] >       },
	I1009 19:04:56.934572  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934578  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934581  161014 command_runner.go:130] >     },
	I1009 19:04:56.934584  161014 command_runner.go:130] >     {
	I1009 19:04:56.934592  161014 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1009 19:04:56.934597  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934605  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1009 19:04:56.934610  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934616  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934629  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1009 19:04:56.934643  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1009 19:04:56.934652  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934660  161014 command_runner.go:130] >       "size":  "73138073",
	I1009 19:04:56.934667  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934677  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934681  161014 command_runner.go:130] >     },
	I1009 19:04:56.934684  161014 command_runner.go:130] >     {
	I1009 19:04:56.934690  161014 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1009 19:04:56.934696  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934704  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1009 19:04:56.934709  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934716  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934726  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1009 19:04:56.934747  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1009 19:04:56.934753  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934772  161014 command_runner.go:130] >       "size":  "53844823",
	I1009 19:04:56.934779  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934786  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.934795  161014 command_runner.go:130] >       },
	I1009 19:04:56.934801  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934811  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934816  161014 command_runner.go:130] >     },
	I1009 19:04:56.934824  161014 command_runner.go:130] >     {
	I1009 19:04:56.934834  161014 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1009 19:04:56.934843  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934850  161014 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1009 19:04:56.934858  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934862  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934871  161014 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1009 19:04:56.934886  161014 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1009 19:04:56.934895  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934902  161014 command_runner.go:130] >       "size":  "742092",
	I1009 19:04:56.934910  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934917  161014 command_runner.go:130] >         "value":  "65535"
	I1009 19:04:56.934926  161014 command_runner.go:130] >       },
	I1009 19:04:56.934934  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934943  161014 command_runner.go:130] >       "pinned":  true
	I1009 19:04:56.934947  161014 command_runner.go:130] >     }
	I1009 19:04:56.934950  161014 command_runner.go:130] >   ]
	I1009 19:04:56.934953  161014 command_runner.go:130] > }
	I1009 19:04:56.935095  161014 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:04:56.935110  161014 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:04:56.935118  161014 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1009 19:04:56.935242  161014 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-158523 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
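The kubelet unit text logged by kubeadm.go:946 above is produced by filling a systemd drop-in template with the node parameters from the config struct (Kubernetes version, container runtime, hostname override, node IP). The sketch below is only an illustration of that rendering step; the template text, field names, and helper are assumptions of this sketch rather than minikube's own implementation.

// Sketch: render a kubelet systemd drop-in similar to the one logged above.
// Template text and field names are illustrative assumptions.
package main

import (
	"fmt"
	"strings"
	"text/template"
)

const kubeletUnitTmpl = `[Unit]
Wants={{.ContainerRuntime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	// Values taken from the node/config struct logged above.
	params := struct {
		ContainerRuntime, KubernetesVersion, NodeName, NodeIP string
	}{"crio", "v1.34.1", "functional-158523", "192.168.49.2"}

	var b strings.Builder
	// template.Must panics on a malformed template, acceptable for a sketch.
	template.Must(template.New("kubelet").Parse(kubeletUnitTmpl)).Execute(&b, params)
	fmt.Print(b.String())
}

The rendered text is what gets written to the kubelet drop-in on the node before "crio config" is queried in the next step of the log.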
	I1009 19:04:56.935323  161014 ssh_runner.go:195] Run: crio config
	I1009 19:04:56.978304  161014 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1009 19:04:56.978336  161014 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1009 19:04:56.978345  161014 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1009 19:04:56.978350  161014 command_runner.go:130] > #
	I1009 19:04:56.978359  161014 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1009 19:04:56.978367  161014 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1009 19:04:56.978390  161014 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1009 19:04:56.978401  161014 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1009 19:04:56.978406  161014 command_runner.go:130] > # reload'.
	I1009 19:04:56.978415  161014 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1009 19:04:56.978436  161014 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1009 19:04:56.978448  161014 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1009 19:04:56.978458  161014 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1009 19:04:56.978464  161014 command_runner.go:130] > [crio]
	I1009 19:04:56.978476  161014 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1009 19:04:56.978484  161014 command_runner.go:130] > # containers images, in this directory.
	I1009 19:04:56.978495  161014 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1009 19:04:56.978505  161014 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1009 19:04:56.978514  161014 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1009 19:04:56.978523  161014 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1009 19:04:56.978532  161014 command_runner.go:130] > # imagestore = ""
	I1009 19:04:56.978541  161014 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1009 19:04:56.978554  161014 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1009 19:04:56.978561  161014 command_runner.go:130] > # storage_driver = "overlay"
	I1009 19:04:56.978571  161014 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1009 19:04:56.978581  161014 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1009 19:04:56.978591  161014 command_runner.go:130] > # storage_option = [
	I1009 19:04:56.978596  161014 command_runner.go:130] > # ]
	I1009 19:04:56.978605  161014 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1009 19:04:56.978616  161014 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1009 19:04:56.978623  161014 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1009 19:04:56.978631  161014 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1009 19:04:56.978640  161014 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1009 19:04:56.978647  161014 command_runner.go:130] > # always happen on a node reboot
	I1009 19:04:56.978654  161014 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1009 19:04:56.978669  161014 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1009 19:04:56.978682  161014 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1009 19:04:56.978689  161014 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1009 19:04:56.978695  161014 command_runner.go:130] > # version_file_persist = ""
	I1009 19:04:56.978714  161014 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1009 19:04:56.978728  161014 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1009 19:04:56.978737  161014 command_runner.go:130] > # internal_wipe = true
	I1009 19:04:56.978748  161014 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1009 19:04:56.978760  161014 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1009 19:04:56.978772  161014 command_runner.go:130] > # internal_repair = true
	I1009 19:04:56.978780  161014 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1009 19:04:56.978794  161014 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1009 19:04:56.978805  161014 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1009 19:04:56.978815  161014 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1009 19:04:56.978825  161014 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1009 19:04:56.978833  161014 command_runner.go:130] > [crio.api]
	I1009 19:04:56.978841  161014 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1009 19:04:56.978851  161014 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1009 19:04:56.978860  161014 command_runner.go:130] > # IP address on which the stream server will listen.
	I1009 19:04:56.978870  161014 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1009 19:04:56.978881  161014 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1009 19:04:56.978892  161014 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1009 19:04:56.978901  161014 command_runner.go:130] > # stream_port = "0"
	I1009 19:04:56.978910  161014 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1009 19:04:56.978920  161014 command_runner.go:130] > # stream_enable_tls = false
	I1009 19:04:56.978929  161014 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1009 19:04:56.978954  161014 command_runner.go:130] > # stream_idle_timeout = ""
	I1009 19:04:56.978969  161014 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1009 19:04:56.978978  161014 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1009 19:04:56.978985  161014 command_runner.go:130] > # stream_tls_cert = ""
	I1009 19:04:56.978999  161014 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1009 19:04:56.979007  161014 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1009 19:04:56.979013  161014 command_runner.go:130] > # stream_tls_key = ""
	I1009 19:04:56.979025  161014 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1009 19:04:56.979039  161014 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1009 19:04:56.979049  161014 command_runner.go:130] > # automatically pick up the changes.
	I1009 19:04:56.979058  161014 command_runner.go:130] > # stream_tls_ca = ""
	I1009 19:04:56.979084  161014 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 19:04:56.979098  161014 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1009 19:04:56.979110  161014 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 19:04:56.979117  161014 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1009 19:04:56.979127  161014 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1009 19:04:56.979134  161014 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1009 19:04:56.979139  161014 command_runner.go:130] > [crio.runtime]
	I1009 19:04:56.979146  161014 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1009 19:04:56.979155  161014 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1009 19:04:56.979163  161014 command_runner.go:130] > # "nofile=1024:2048"
	I1009 19:04:56.979177  161014 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1009 19:04:56.979187  161014 command_runner.go:130] > # default_ulimits = [
	I1009 19:04:56.979193  161014 command_runner.go:130] > # ]
	I1009 19:04:56.979206  161014 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1009 19:04:56.979215  161014 command_runner.go:130] > # no_pivot = false
	I1009 19:04:56.979226  161014 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1009 19:04:56.979239  161014 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1009 19:04:56.979251  161014 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1009 19:04:56.979259  161014 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1009 19:04:56.979267  161014 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1009 19:04:56.979277  161014 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 19:04:56.979283  161014 command_runner.go:130] > # conmon = ""
	I1009 19:04:56.979290  161014 command_runner.go:130] > # Cgroup setting for conmon
	I1009 19:04:56.979301  161014 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1009 19:04:56.979311  161014 command_runner.go:130] > conmon_cgroup = "pod"
	I1009 19:04:56.979320  161014 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1009 19:04:56.979327  161014 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1009 19:04:56.979338  161014 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 19:04:56.979347  161014 command_runner.go:130] > # conmon_env = [
	I1009 19:04:56.979353  161014 command_runner.go:130] > # ]
	I1009 19:04:56.979364  161014 command_runner.go:130] > # Additional environment variables to set for all the
	I1009 19:04:56.979392  161014 command_runner.go:130] > # containers. These are overridden if set in the
	I1009 19:04:56.979406  161014 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1009 19:04:56.979412  161014 command_runner.go:130] > # default_env = [
	I1009 19:04:56.979420  161014 command_runner.go:130] > # ]
	I1009 19:04:56.979429  161014 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1009 19:04:56.979443  161014 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1009 19:04:56.979453  161014 command_runner.go:130] > # selinux = false
	I1009 19:04:56.979463  161014 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1009 19:04:56.979479  161014 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1009 19:04:56.979489  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.979497  161014 command_runner.go:130] > # seccomp_profile = ""
	I1009 19:04:56.979509  161014 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1009 19:04:56.979522  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.979529  161014 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1009 19:04:56.979542  161014 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1009 19:04:56.979555  161014 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1009 19:04:56.979564  161014 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1009 19:04:56.979574  161014 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1009 19:04:56.979585  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.979593  161014 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1009 19:04:56.979605  161014 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1009 19:04:56.979615  161014 command_runner.go:130] > # the cgroup blockio controller.
	I1009 19:04:56.979622  161014 command_runner.go:130] > # blockio_config_file = ""
	I1009 19:04:56.979636  161014 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1009 19:04:56.979642  161014 command_runner.go:130] > # blockio parameters.
	I1009 19:04:56.979648  161014 command_runner.go:130] > # blockio_reload = false
	I1009 19:04:56.979658  161014 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1009 19:04:56.979664  161014 command_runner.go:130] > # irqbalance daemon.
	I1009 19:04:56.979672  161014 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1009 19:04:56.979681  161014 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1009 19:04:56.979690  161014 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1009 19:04:56.979700  161014 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1009 19:04:56.979710  161014 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1009 19:04:56.979724  161014 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1009 19:04:56.979731  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.979741  161014 command_runner.go:130] > # rdt_config_file = ""
	I1009 19:04:56.979753  161014 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1009 19:04:56.979764  161014 command_runner.go:130] > # cgroup_manager = "systemd"
	I1009 19:04:56.979773  161014 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1009 19:04:56.979783  161014 command_runner.go:130] > # separate_pull_cgroup = ""
	I1009 19:04:56.979791  161014 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1009 19:04:56.979800  161014 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1009 19:04:56.979809  161014 command_runner.go:130] > # will be added.
	I1009 19:04:56.979817  161014 command_runner.go:130] > # default_capabilities = [
	I1009 19:04:56.979826  161014 command_runner.go:130] > # 	"CHOWN",
	I1009 19:04:56.979832  161014 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1009 19:04:56.979840  161014 command_runner.go:130] > # 	"FSETID",
	I1009 19:04:56.979846  161014 command_runner.go:130] > # 	"FOWNER",
	I1009 19:04:56.979855  161014 command_runner.go:130] > # 	"SETGID",
	I1009 19:04:56.979876  161014 command_runner.go:130] > # 	"SETUID",
	I1009 19:04:56.979885  161014 command_runner.go:130] > # 	"SETPCAP",
	I1009 19:04:56.979891  161014 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1009 19:04:56.979901  161014 command_runner.go:130] > # 	"KILL",
	I1009 19:04:56.979906  161014 command_runner.go:130] > # ]
	I1009 19:04:56.979920  161014 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1009 19:04:56.979930  161014 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1009 19:04:56.979950  161014 command_runner.go:130] > # add_inheritable_capabilities = false
	I1009 19:04:56.979963  161014 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1009 19:04:56.979972  161014 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 19:04:56.979977  161014 command_runner.go:130] > default_sysctls = [
	I1009 19:04:56.979993  161014 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1009 19:04:56.979997  161014 command_runner.go:130] > ]
	I1009 19:04:56.980003  161014 command_runner.go:130] > # List of devices on the host that a
	I1009 19:04:56.980010  161014 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1009 19:04:56.980015  161014 command_runner.go:130] > # allowed_devices = [
	I1009 19:04:56.980019  161014 command_runner.go:130] > # 	"/dev/fuse",
	I1009 19:04:56.980024  161014 command_runner.go:130] > # 	"/dev/net/tun",
	I1009 19:04:56.980029  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980035  161014 command_runner.go:130] > # List of additional devices. specified as
	I1009 19:04:56.980047  161014 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1009 19:04:56.980055  161014 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1009 19:04:56.980063  161014 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 19:04:56.980069  161014 command_runner.go:130] > # additional_devices = [
	I1009 19:04:56.980072  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980079  161014 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1009 19:04:56.980084  161014 command_runner.go:130] > # cdi_spec_dirs = [
	I1009 19:04:56.980091  161014 command_runner.go:130] > # 	"/etc/cdi",
	I1009 19:04:56.980097  161014 command_runner.go:130] > # 	"/var/run/cdi",
	I1009 19:04:56.980101  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980111  161014 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1009 19:04:56.980120  161014 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1009 19:04:56.980126  161014 command_runner.go:130] > # Defaults to false.
	I1009 19:04:56.980133  161014 command_runner.go:130] > # device_ownership_from_security_context = false
	I1009 19:04:56.980146  161014 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1009 19:04:56.980157  161014 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1009 19:04:56.980163  161014 command_runner.go:130] > # hooks_dir = [
	I1009 19:04:56.980167  161014 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1009 19:04:56.980173  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980179  161014 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1009 19:04:56.980187  161014 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1009 19:04:56.980192  161014 command_runner.go:130] > # its default mounts from the following two files:
	I1009 19:04:56.980197  161014 command_runner.go:130] > #
	I1009 19:04:56.980202  161014 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1009 19:04:56.980211  161014 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1009 19:04:56.980218  161014 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1009 19:04:56.980221  161014 command_runner.go:130] > #
	I1009 19:04:56.980230  161014 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1009 19:04:56.980236  161014 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1009 19:04:56.980244  161014 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1009 19:04:56.980252  161014 command_runner.go:130] > #      only add mounts it finds in this file.
	I1009 19:04:56.980255  161014 command_runner.go:130] > #
	I1009 19:04:56.980261  161014 command_runner.go:130] > # default_mounts_file = ""
	I1009 19:04:56.980266  161014 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1009 19:04:56.980275  161014 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1009 19:04:56.980281  161014 command_runner.go:130] > # pids_limit = -1
	I1009 19:04:56.980286  161014 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1009 19:04:56.980294  161014 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1009 19:04:56.980300  161014 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1009 19:04:56.980309  161014 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1009 19:04:56.980315  161014 command_runner.go:130] > # log_size_max = -1
	I1009 19:04:56.980322  161014 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1009 19:04:56.980328  161014 command_runner.go:130] > # log_to_journald = false
	I1009 19:04:56.980335  161014 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1009 19:04:56.980341  161014 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1009 19:04:56.980345  161014 command_runner.go:130] > # Path to directory for container attach sockets.
	I1009 19:04:56.980352  161014 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1009 19:04:56.980357  161014 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1009 19:04:56.980365  161014 command_runner.go:130] > # bind_mount_prefix = ""
	I1009 19:04:56.980370  161014 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1009 19:04:56.980376  161014 command_runner.go:130] > # read_only = false
	I1009 19:04:56.980395  161014 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1009 19:04:56.980405  161014 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1009 19:04:56.980413  161014 command_runner.go:130] > # live configuration reload.
	I1009 19:04:56.980417  161014 command_runner.go:130] > # log_level = "info"
	I1009 19:04:56.980425  161014 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1009 19:04:56.980430  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.980435  161014 command_runner.go:130] > # log_filter = ""
	I1009 19:04:56.980441  161014 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1009 19:04:56.980449  161014 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1009 19:04:56.980455  161014 command_runner.go:130] > # separated by comma.
	I1009 19:04:56.980462  161014 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:04:56.980467  161014 command_runner.go:130] > # uid_mappings = ""
	I1009 19:04:56.980473  161014 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1009 19:04:56.980480  161014 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1009 19:04:56.980486  161014 command_runner.go:130] > # separated by comma.
	I1009 19:04:56.980496  161014 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:04:56.980502  161014 command_runner.go:130] > # gid_mappings = ""
	I1009 19:04:56.980508  161014 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1009 19:04:56.980516  161014 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 19:04:56.980524  161014 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 19:04:56.980534  161014 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:04:56.980540  161014 command_runner.go:130] > # minimum_mappable_uid = -1
	I1009 19:04:56.980547  161014 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1009 19:04:56.980556  161014 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 19:04:56.980562  161014 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 19:04:56.980569  161014 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:04:56.980575  161014 command_runner.go:130] > # minimum_mappable_gid = -1
	I1009 19:04:56.980581  161014 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1009 19:04:56.980588  161014 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1009 19:04:56.980593  161014 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1009 19:04:56.980599  161014 command_runner.go:130] > # ctr_stop_timeout = 30
	I1009 19:04:56.980605  161014 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1009 19:04:56.980612  161014 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1009 19:04:56.980616  161014 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1009 19:04:56.980623  161014 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1009 19:04:56.980627  161014 command_runner.go:130] > # drop_infra_ctr = true
	I1009 19:04:56.980635  161014 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1009 19:04:56.980640  161014 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1009 19:04:56.980649  161014 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1009 19:04:56.980657  161014 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1009 19:04:56.980666  161014 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1009 19:04:56.980674  161014 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1009 19:04:56.980682  161014 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1009 19:04:56.980687  161014 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1009 19:04:56.980695  161014 command_runner.go:130] > # shared_cpuset = ""
	I1009 19:04:56.980703  161014 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1009 19:04:56.980707  161014 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1009 19:04:56.980712  161014 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1009 19:04:56.980719  161014 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1009 19:04:56.980725  161014 command_runner.go:130] > # pinns_path = ""
	I1009 19:04:56.980730  161014 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1009 19:04:56.980738  161014 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1009 19:04:56.980742  161014 command_runner.go:130] > # enable_criu_support = true
	I1009 19:04:56.980749  161014 command_runner.go:130] > # Enable/disable the generation of the container,
	I1009 19:04:56.980754  161014 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1009 19:04:56.980761  161014 command_runner.go:130] > # enable_pod_events = false
	I1009 19:04:56.980767  161014 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1009 19:04:56.980775  161014 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1009 19:04:56.980779  161014 command_runner.go:130] > # default_runtime = "crun"
	I1009 19:04:56.980785  161014 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1009 19:04:56.980792  161014 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1009 19:04:56.980803  161014 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1009 19:04:56.980809  161014 command_runner.go:130] > # creation as a file is not desired either.
	I1009 19:04:56.980817  161014 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1009 19:04:56.980823  161014 command_runner.go:130] > # the hostname is being managed dynamically.
	I1009 19:04:56.980828  161014 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1009 19:04:56.980831  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980836  161014 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1009 19:04:56.980844  161014 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1009 19:04:56.980850  161014 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1009 19:04:56.980858  161014 command_runner.go:130] > # Each entry in the table should follow the format:
	I1009 19:04:56.980861  161014 command_runner.go:130] > #
	I1009 19:04:56.980865  161014 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1009 19:04:56.980872  161014 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1009 19:04:56.980875  161014 command_runner.go:130] > # runtime_type = "oci"
	I1009 19:04:56.980882  161014 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1009 19:04:56.980887  161014 command_runner.go:130] > # inherit_default_runtime = false
	I1009 19:04:56.980894  161014 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1009 19:04:56.980898  161014 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1009 19:04:56.980902  161014 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1009 19:04:56.980906  161014 command_runner.go:130] > # monitor_env = []
	I1009 19:04:56.980910  161014 command_runner.go:130] > # privileged_without_host_devices = false
	I1009 19:04:56.980917  161014 command_runner.go:130] > # allowed_annotations = []
	I1009 19:04:56.980922  161014 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1009 19:04:56.980928  161014 command_runner.go:130] > # no_sync_log = false
	I1009 19:04:56.980932  161014 command_runner.go:130] > # default_annotations = {}
	I1009 19:04:56.980939  161014 command_runner.go:130] > # stream_websockets = false
	I1009 19:04:56.980949  161014 command_runner.go:130] > # seccomp_profile = ""
	I1009 19:04:56.980985  161014 command_runner.go:130] > # Where:
	I1009 19:04:56.980994  161014 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1009 19:04:56.980999  161014 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1009 19:04:56.981005  161014 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1009 19:04:56.981010  161014 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1009 19:04:56.981014  161014 command_runner.go:130] > #   in $PATH.
	I1009 19:04:56.981020  161014 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1009 19:04:56.981024  161014 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1009 19:04:56.981032  161014 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1009 19:04:56.981035  161014 command_runner.go:130] > #   state.
	I1009 19:04:56.981041  161014 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1009 19:04:56.981049  161014 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1009 19:04:56.981054  161014 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1009 19:04:56.981063  161014 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1009 19:04:56.981067  161014 command_runner.go:130] > #   the values from the default runtime on load time.
	I1009 19:04:56.981078  161014 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1009 19:04:56.981086  161014 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1009 19:04:56.981092  161014 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1009 19:04:56.981100  161014 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1009 19:04:56.981105  161014 command_runner.go:130] > #   The currently recognized values are:
	I1009 19:04:56.981113  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1009 19:04:56.981123  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1009 19:04:56.981130  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1009 19:04:56.981135  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1009 19:04:56.981144  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1009 19:04:56.981153  161014 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1009 19:04:56.981161  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1009 19:04:56.981169  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1009 19:04:56.981177  161014 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1009 19:04:56.981183  161014 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1009 19:04:56.981191  161014 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1009 19:04:56.981199  161014 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1009 19:04:56.981204  161014 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1009 19:04:56.981213  161014 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1009 19:04:56.981221  161014 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1009 19:04:56.981227  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1009 19:04:56.981235  161014 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1009 19:04:56.981239  161014 command_runner.go:130] > #   deprecated option "conmon".
	I1009 19:04:56.981248  161014 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1009 19:04:56.981255  161014 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1009 19:04:56.981261  161014 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1009 19:04:56.981268  161014 command_runner.go:130] > #   should be moved to the container's cgroup
	I1009 19:04:56.981273  161014 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1009 19:04:56.981280  161014 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1009 19:04:56.981287  161014 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1009 19:04:56.981293  161014 command_runner.go:130] > #   conmon-rs by using:
	I1009 19:04:56.981300  161014 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1009 19:04:56.981309  161014 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1009 19:04:56.981318  161014 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1009 19:04:56.981326  161014 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1009 19:04:56.981334  161014 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1009 19:04:56.981341  161014 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1009 19:04:56.981351  161014 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1009 19:04:56.981359  161014 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1009 19:04:56.981370  161014 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1009 19:04:56.981395  161014 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1009 19:04:56.981405  161014 command_runner.go:130] > #   when a machine crash happens.
	I1009 19:04:56.981411  161014 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1009 19:04:56.981421  161014 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1009 19:04:56.981431  161014 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1009 19:04:56.981437  161014 command_runner.go:130] > #   seccomp profile for the runtime.
	I1009 19:04:56.981443  161014 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1009 19:04:56.981452  161014 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1009 19:04:56.981455  161014 command_runner.go:130] > #
	I1009 19:04:56.981460  161014 command_runner.go:130] > # Using the seccomp notifier feature:
	I1009 19:04:56.981465  161014 command_runner.go:130] > #
	I1009 19:04:56.981472  161014 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1009 19:04:56.981480  161014 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1009 19:04:56.981483  161014 command_runner.go:130] > #
	I1009 19:04:56.981490  161014 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1009 19:04:56.981498  161014 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1009 19:04:56.981501  161014 command_runner.go:130] > #
	I1009 19:04:56.981507  161014 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1009 19:04:56.981512  161014 command_runner.go:130] > # feature.
	I1009 19:04:56.981515  161014 command_runner.go:130] > #
	I1009 19:04:56.981537  161014 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1009 19:04:56.981545  161014 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1009 19:04:56.981553  161014 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1009 19:04:56.981562  161014 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1009 19:04:56.981568  161014 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1009 19:04:56.981573  161014 command_runner.go:130] > #
	I1009 19:04:56.981579  161014 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1009 19:04:56.981587  161014 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1009 19:04:56.981590  161014 command_runner.go:130] > #
	I1009 19:04:56.981598  161014 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1009 19:04:56.981603  161014 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1009 19:04:56.981608  161014 command_runner.go:130] > #
	I1009 19:04:56.981614  161014 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1009 19:04:56.981622  161014 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1009 19:04:56.981628  161014 command_runner.go:130] > # limitation.
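For illustration only (not output from this run), a pod exercising the seccomp notifier described above might look like the following sketch; it assumes a runtime handler that lists "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations, and the pod name and image are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: seccomp-notifier-demo                           # placeholder name
      annotations:
        io.kubernetes.cri-o.seccompNotifierAction: "stop"   # terminate ~5s after a blocked syscall is seen
    spec:
      restartPolicy: Never                                  # required, or the kubelet restarts the container immediately
      containers:
      - name: app
        image: registry.k8s.io/pause:3.10.1                 # placeholder image
        securityContext:
          seccompProfile:
            type: RuntimeDefault                            # a seccomp profile must actually be applied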
	I1009 19:04:56.981632  161014 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1009 19:04:56.981639  161014 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1009 19:04:56.981642  161014 command_runner.go:130] > runtime_type = ""
	I1009 19:04:56.981648  161014 command_runner.go:130] > runtime_root = "/run/crun"
	I1009 19:04:56.981652  161014 command_runner.go:130] > inherit_default_runtime = false
	I1009 19:04:56.981657  161014 command_runner.go:130] > runtime_config_path = ""
	I1009 19:04:56.981663  161014 command_runner.go:130] > container_min_memory = ""
	I1009 19:04:56.981667  161014 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 19:04:56.981673  161014 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 19:04:56.981677  161014 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 19:04:56.981683  161014 command_runner.go:130] > allowed_annotations = [
	I1009 19:04:56.981687  161014 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1009 19:04:56.981694  161014 command_runner.go:130] > ]
	I1009 19:04:56.981699  161014 command_runner.go:130] > privileged_without_host_devices = false
	I1009 19:04:56.981705  161014 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1009 19:04:56.981709  161014 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1009 19:04:56.981715  161014 command_runner.go:130] > runtime_type = ""
	I1009 19:04:56.981719  161014 command_runner.go:130] > runtime_root = "/run/runc"
	I1009 19:04:56.981725  161014 command_runner.go:130] > inherit_default_runtime = false
	I1009 19:04:56.981729  161014 command_runner.go:130] > runtime_config_path = ""
	I1009 19:04:56.981735  161014 command_runner.go:130] > container_min_memory = ""
	I1009 19:04:56.981739  161014 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 19:04:56.981744  161014 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 19:04:56.981750  161014 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 19:04:56.981754  161014 command_runner.go:130] > privileged_without_host_devices = false
	I1009 19:04:56.981761  161014 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1009 19:04:56.981769  161014 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1009 19:04:56.981774  161014 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1009 19:04:56.981783  161014 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1009 19:04:56.981795  161014 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1009 19:04:56.981807  161014 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1009 19:04:56.981815  161014 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1009 19:04:56.981823  161014 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1009 19:04:56.981831  161014 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1009 19:04:56.981840  161014 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1009 19:04:56.981848  161014 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1009 19:04:56.981854  161014 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1009 19:04:56.981859  161014 command_runner.go:130] > # Example:
	I1009 19:04:56.981864  161014 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1009 19:04:56.981871  161014 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1009 19:04:56.981875  161014 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1009 19:04:56.981884  161014 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1009 19:04:56.981899  161014 command_runner.go:130] > # cpuset = "0-1"
	I1009 19:04:56.981905  161014 command_runner.go:130] > # cpushares = "5"
	I1009 19:04:56.981909  161014 command_runner.go:130] > # cpuquota = "1000"
	I1009 19:04:56.981912  161014 command_runner.go:130] > # cpuperiod = "100000"
	I1009 19:04:56.981920  161014 command_runner.go:130] > # cpulimit = "35"
	I1009 19:04:56.981926  161014 command_runner.go:130] > # Where:
	I1009 19:04:56.981936  161014 command_runner.go:130] > # The workload name is workload-type.
	I1009 19:04:56.981948  161014 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1009 19:04:56.981955  161014 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1009 19:04:56.981962  161014 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1009 19:04:56.981971  161014 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1009 19:04:56.981979  161014 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
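As an illustrative sketch (not taken from this run), a pod opting into the example workload above would carry the activation annotation; the per-container override mirrors the annotation form quoted in the example, and the "512" value is purely hypothetical:

    apiVersion: v1
    kind: Pod
    metadata:
      name: workload-demo                                   # placeholder name
      annotations:
        io.crio/workload: ""                                # activation_annotation; key only, value ignored
        io.crio.workload-type/app: '{"cpushares": "512"}'   # per-container override, as in the example above
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.10.1                 # placeholder image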
	I1009 19:04:56.981984  161014 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1009 19:04:56.981993  161014 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1009 19:04:56.981997  161014 command_runner.go:130] > # Default value is set to true
	I1009 19:04:56.982003  161014 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1009 19:04:56.982009  161014 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1009 19:04:56.982013  161014 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1009 19:04:56.982017  161014 command_runner.go:130] > # Default value is set to 'false'
	I1009 19:04:56.982020  161014 command_runner.go:130] > # disable_hostport_mapping = false
	I1009 19:04:56.982025  161014 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1009 19:04:56.982034  161014 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1009 19:04:56.982039  161014 command_runner.go:130] > # timezone = ""
	I1009 19:04:56.982045  161014 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1009 19:04:56.982050  161014 command_runner.go:130] > #
	I1009 19:04:56.982056  161014 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1009 19:04:56.982064  161014 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1009 19:04:56.982067  161014 command_runner.go:130] > [crio.image]
	I1009 19:04:56.982072  161014 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1009 19:04:56.982080  161014 command_runner.go:130] > # default_transport = "docker://"
	I1009 19:04:56.982085  161014 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1009 19:04:56.982093  161014 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1009 19:04:56.982100  161014 command_runner.go:130] > # global_auth_file = ""
	I1009 19:04:56.982105  161014 command_runner.go:130] > # The image used to instantiate infra containers.
	I1009 19:04:56.982112  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.982116  161014 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1009 19:04:56.982124  161014 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1009 19:04:56.982132  161014 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1009 19:04:56.982137  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.982143  161014 command_runner.go:130] > # pause_image_auth_file = ""
	I1009 19:04:56.982148  161014 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1009 19:04:56.982156  161014 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1009 19:04:56.982162  161014 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1009 19:04:56.982170  161014 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1009 19:04:56.982173  161014 command_runner.go:130] > # pause_command = "/pause"
	I1009 19:04:56.982178  161014 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1009 19:04:56.982186  161014 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1009 19:04:56.982191  161014 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1009 19:04:56.982199  161014 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1009 19:04:56.982204  161014 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1009 19:04:56.982213  161014 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1009 19:04:56.982219  161014 command_runner.go:130] > # pinned_images = [
	I1009 19:04:56.982222  161014 command_runner.go:130] > # ]
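For illustration only (not part of this run), a drop-in under the /etc/crio/crio.conf.d directory referenced earlier could pin the pause image so it is excluded from kubelet garbage collection; the values mirror the commented defaults above and the file name is hypothetical:

    # /etc/crio/crio.conf.d/20-pinned-images.conf
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"
    pinned_images = [
        "registry.k8s.io/pause:3.10.1",
    ]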
	I1009 19:04:56.982227  161014 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1009 19:04:56.982235  161014 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1009 19:04:56.982241  161014 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1009 19:04:56.982248  161014 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1009 19:04:56.982253  161014 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1009 19:04:56.982260  161014 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1009 19:04:56.982265  161014 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1009 19:04:56.982274  161014 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1009 19:04:56.982282  161014 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1009 19:04:56.982287  161014 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1009 19:04:56.982295  161014 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1009 19:04:56.982302  161014 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1009 19:04:56.982307  161014 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1009 19:04:56.982316  161014 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1009 19:04:56.982322  161014 command_runner.go:130] > # changing them here.
	I1009 19:04:56.982327  161014 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1009 19:04:56.982333  161014 command_runner.go:130] > # insecure_registries = [
	I1009 19:04:56.982336  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982342  161014 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1009 19:04:56.982352  161014 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1009 19:04:56.982359  161014 command_runner.go:130] > # image_volumes = "mkdir"
	I1009 19:04:56.982364  161014 command_runner.go:130] > # Temporary directory to use for storing big files
	I1009 19:04:56.982370  161014 command_runner.go:130] > # big_files_temporary_dir = ""
	I1009 19:04:56.982385  161014 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1009 19:04:56.982398  161014 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1009 19:04:56.982403  161014 command_runner.go:130] > # auto_reload_registries = false
	I1009 19:04:56.982412  161014 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1009 19:04:56.982419  161014 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1009 19:04:56.982427  161014 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1009 19:04:56.982431  161014 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1009 19:04:56.982435  161014 command_runner.go:130] > # The mode of short name resolution.
	I1009 19:04:56.982441  161014 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1009 19:04:56.982450  161014 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1009 19:04:56.982455  161014 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1009 19:04:56.982460  161014 command_runner.go:130] > # short_name_mode = "enforcing"
	I1009 19:04:56.982465  161014 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1009 19:04:56.982472  161014 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1009 19:04:56.982476  161014 command_runner.go:130] > # oci_artifact_mount_support = true
	I1009 19:04:56.982484  161014 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1009 19:04:56.982487  161014 command_runner.go:130] > # CNI plugins.
	I1009 19:04:56.982490  161014 command_runner.go:130] > [crio.network]
	I1009 19:04:56.982496  161014 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1009 19:04:56.982501  161014 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1009 19:04:56.982507  161014 command_runner.go:130] > # cni_default_network = ""
	I1009 19:04:56.982512  161014 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1009 19:04:56.982519  161014 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1009 19:04:56.982524  161014 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1009 19:04:56.982530  161014 command_runner.go:130] > # plugin_dirs = [
	I1009 19:04:56.982533  161014 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1009 19:04:56.982536  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982540  161014 command_runner.go:130] > # List of included pod metrics.
	I1009 19:04:56.982544  161014 command_runner.go:130] > # included_pod_metrics = [
	I1009 19:04:56.982547  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982552  161014 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1009 19:04:56.982558  161014 command_runner.go:130] > [crio.metrics]
	I1009 19:04:56.982562  161014 command_runner.go:130] > # Globally enable or disable metrics support.
	I1009 19:04:56.982566  161014 command_runner.go:130] > # enable_metrics = false
	I1009 19:04:56.982570  161014 command_runner.go:130] > # Specify enabled metrics collectors.
	I1009 19:04:56.982574  161014 command_runner.go:130] > # Per default all metrics are enabled.
	I1009 19:04:56.982579  161014 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1009 19:04:56.982588  161014 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1009 19:04:56.982593  161014 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1009 19:04:56.982598  161014 command_runner.go:130] > # metrics_collectors = [
	I1009 19:04:56.982602  161014 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1009 19:04:56.982607  161014 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1009 19:04:56.982610  161014 command_runner.go:130] > # 	"containers_oom_total",
	I1009 19:04:56.982614  161014 command_runner.go:130] > # 	"processes_defunct",
	I1009 19:04:56.982617  161014 command_runner.go:130] > # 	"operations_total",
	I1009 19:04:56.982621  161014 command_runner.go:130] > # 	"operations_latency_seconds",
	I1009 19:04:56.982625  161014 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1009 19:04:56.982629  161014 command_runner.go:130] > # 	"operations_errors_total",
	I1009 19:04:56.982632  161014 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1009 19:04:56.982636  161014 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1009 19:04:56.982640  161014 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1009 19:04:56.982643  161014 command_runner.go:130] > # 	"image_pulls_success_total",
	I1009 19:04:56.982648  161014 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1009 19:04:56.982652  161014 command_runner.go:130] > # 	"containers_oom_count_total",
	I1009 19:04:56.982656  161014 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1009 19:04:56.982660  161014 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1009 19:04:56.982664  161014 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1009 19:04:56.982667  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982672  161014 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1009 19:04:56.982675  161014 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1009 19:04:56.982680  161014 command_runner.go:130] > # The port on which the metrics server will listen.
	I1009 19:04:56.982683  161014 command_runner.go:130] > # metrics_port = 9090
	I1009 19:04:56.982689  161014 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1009 19:04:56.982693  161014 command_runner.go:130] > # metrics_socket = ""
	I1009 19:04:56.982698  161014 command_runner.go:130] > # The certificate for the secure metrics server.
	I1009 19:04:56.982706  161014 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1009 19:04:56.982712  161014 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1009 19:04:56.982718  161014 command_runner.go:130] > # certificate on any modification event.
	I1009 19:04:56.982722  161014 command_runner.go:130] > # metrics_cert = ""
	I1009 19:04:56.982735  161014 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1009 19:04:56.982741  161014 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1009 19:04:56.982746  161014 command_runner.go:130] > # metrics_key = ""
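A hedged sketch of enabling the metrics endpoint described above via a drop-in and scraping it on the node; the host, port, and metric prefixes mirror the commented defaults, and the drop-in file name is hypothetical:

    # /etc/crio/crio.conf.d/30-metrics.conf
    [crio.metrics]
    enable_metrics = true
    metrics_host = "127.0.0.1"
    metrics_port = 9090

    # on the node, after reloading CRI-O:
    curl -s http://127.0.0.1:9090/metrics | grep -E 'crio_operations|container_runtime_crio_operations'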
	I1009 19:04:56.982753  161014 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1009 19:04:56.982758  161014 command_runner.go:130] > [crio.tracing]
	I1009 19:04:56.982766  161014 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1009 19:04:56.982771  161014 command_runner.go:130] > # enable_tracing = false
	I1009 19:04:56.982779  161014 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1009 19:04:56.982788  161014 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1009 19:04:56.982798  161014 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1009 19:04:56.982809  161014 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1009 19:04:56.982818  161014 command_runner.go:130] > # CRI-O NRI configuration.
	I1009 19:04:56.982821  161014 command_runner.go:130] > [crio.nri]
	I1009 19:04:56.982825  161014 command_runner.go:130] > # Globally enable or disable NRI.
	I1009 19:04:56.982832  161014 command_runner.go:130] > # enable_nri = true
	I1009 19:04:56.982836  161014 command_runner.go:130] > # NRI socket to listen on.
	I1009 19:04:56.982842  161014 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1009 19:04:56.982846  161014 command_runner.go:130] > # NRI plugin directory to use.
	I1009 19:04:56.982851  161014 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1009 19:04:56.982856  161014 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1009 19:04:56.982863  161014 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1009 19:04:56.982868  161014 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1009 19:04:56.982900  161014 command_runner.go:130] > # nri_disable_connections = false
	I1009 19:04:56.982908  161014 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1009 19:04:56.982912  161014 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1009 19:04:56.982916  161014 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1009 19:04:56.982920  161014 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1009 19:04:56.982926  161014 command_runner.go:130] > # NRI default validator configuration.
	I1009 19:04:56.982933  161014 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1009 19:04:56.982946  161014 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1009 19:04:56.982953  161014 command_runner.go:130] > # can be restricted/rejected:
	I1009 19:04:56.982956  161014 command_runner.go:130] > # - OCI hook injection
	I1009 19:04:56.982961  161014 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1009 19:04:56.982969  161014 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1009 19:04:56.982974  161014 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1009 19:04:56.982982  161014 command_runner.go:130] > # - adjustment of linux namespaces
	I1009 19:04:56.982988  161014 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1009 19:04:56.982996  161014 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1009 19:04:56.983002  161014 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1009 19:04:56.983007  161014 command_runner.go:130] > #
	I1009 19:04:56.983011  161014 command_runner.go:130] > # [crio.nri.default_validator]
	I1009 19:04:56.983015  161014 command_runner.go:130] > # nri_enable_default_validator = false
	I1009 19:04:56.983020  161014 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1009 19:04:56.983027  161014 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1009 19:04:56.983032  161014 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1009 19:04:56.983039  161014 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1009 19:04:56.983044  161014 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1009 19:04:56.983050  161014 command_runner.go:130] > # nri_validator_required_plugins = [
	I1009 19:04:56.983053  161014 command_runner.go:130] > # ]
	I1009 19:04:56.983058  161014 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1009 19:04:56.983066  161014 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1009 19:04:56.983069  161014 command_runner.go:130] > [crio.stats]
	I1009 19:04:56.983074  161014 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1009 19:04:56.983087  161014 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1009 19:04:56.983092  161014 command_runner.go:130] > # stats_collection_period = 0
	I1009 19:04:56.983097  161014 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1009 19:04:56.983106  161014 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1009 19:04:56.983109  161014 command_runner.go:130] > # collection_period = 0
	I1009 19:04:56.983133  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.961902946Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1009 19:04:56.983143  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.961928249Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1009 19:04:56.983151  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.961952575Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1009 19:04:56.983160  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.961969788Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1009 19:04:56.983168  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.962036562Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.983178  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.96221376Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1009 19:04:56.983187  161014 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1009 19:04:56.983250  161014 cni.go:84] Creating CNI manager for ""
	I1009 19:04:56.983259  161014 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:04:56.983280  161014 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:04:56.983306  161014 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-158523 NodeName:functional-158523 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:04:56.983442  161014 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-158523"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:04:56.983504  161014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:04:56.992256  161014 command_runner.go:130] > kubeadm
	I1009 19:04:56.992278  161014 command_runner.go:130] > kubectl
	I1009 19:04:56.992282  161014 command_runner.go:130] > kubelet
	I1009 19:04:56.992304  161014 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:04:56.992347  161014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:04:57.000522  161014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 19:04:57.013113  161014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:04:57.026211  161014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1009 19:04:57.038776  161014 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:04:57.042573  161014 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1009 19:04:57.042649  161014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:04:57.130268  161014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:04:57.143785  161014 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523 for IP: 192.168.49.2
	I1009 19:04:57.143808  161014 certs.go:195] generating shared ca certs ...
	I1009 19:04:57.143829  161014 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:04:57.144031  161014 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:04:57.144072  161014 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:04:57.144082  161014 certs.go:257] generating profile certs ...
	I1009 19:04:57.144182  161014 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key
	I1009 19:04:57.144224  161014 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key.1809350a
	I1009 19:04:57.144260  161014 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key
	I1009 19:04:57.144272  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:04:57.144283  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:04:57.144293  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:04:57.144302  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:04:57.144314  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:04:57.144325  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:04:57.144336  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:04:57.144348  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:04:57.144426  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:04:57.144461  161014 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:04:57.144470  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:04:57.144493  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:04:57.144516  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:04:57.144537  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:04:57.144579  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:04:57.144605  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.144619  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.144631  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.145144  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:04:57.163977  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:04:57.182180  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:04:57.200741  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:04:57.219086  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:04:57.236775  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:04:57.254529  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:04:57.272276  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:04:57.290804  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:04:57.309893  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:04:57.327963  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:04:57.345810  161014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:04:57.359185  161014 ssh_runner.go:195] Run: openssl version
	I1009 19:04:57.366137  161014 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1009 19:04:57.366338  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:04:57.375985  161014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.380041  161014 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.380082  161014 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.380117  161014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.415315  161014 command_runner.go:130] > b5213941
	I1009 19:04:57.415413  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:04:57.424315  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:04:57.433300  161014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.437553  161014 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.437594  161014 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.437635  161014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.472859  161014 command_runner.go:130] > 51391683
	I1009 19:04:57.473177  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:04:57.481800  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:04:57.490997  161014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.494992  161014 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.495040  161014 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.495095  161014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.529155  161014 command_runner.go:130] > 3ec20f2e
	I1009 19:04:57.529240  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
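The three openssl/ln pairs above follow the standard OpenSSL subject-hash layout: compute the certificate's hash, then link the certificate as <hash>.0 under /etc/ssl/certs so OpenSSL can look it up. A generic sketch of the same pattern, using the first certificate from the log:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"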
	I1009 19:04:57.537710  161014 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:04:57.541624  161014 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:04:57.541645  161014 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1009 19:04:57.541653  161014 command_runner.go:130] > Device: 8,1	Inode: 573939      Links: 1
	I1009 19:04:57.541662  161014 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 19:04:57.541679  161014 command_runner.go:130] > Access: 2025-10-09 19:00:49.271404553 +0000
	I1009 19:04:57.541690  161014 command_runner.go:130] > Modify: 2025-10-09 18:56:44.405307509 +0000
	I1009 19:04:57.541704  161014 command_runner.go:130] > Change: 2025-10-09 18:56:44.405307509 +0000
	I1009 19:04:57.541714  161014 command_runner.go:130] >  Birth: 2025-10-09 18:56:44.405307509 +0000
	I1009 19:04:57.541773  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:04:57.576034  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.576418  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:04:57.610746  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.611106  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:04:57.645558  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.645650  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:04:57.680926  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.681269  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:04:57.716681  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.716965  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 19:04:57.752444  161014 command_runner.go:130] > Certificate will not expire
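Each of the six expiry probes above is an "openssl x509 -checkend 86400" run against a different certificate; an equivalent loop over the same paths (illustrative, not executed here) would be:

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
        && echo "${c}: will not expire within 24h"
    done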
	I1009 19:04:57.752733  161014 kubeadm.go:400] StartCluster: {Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:04:57.752827  161014 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:04:57.752877  161014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:04:57.781930  161014 cri.go:89] found id: ""
	I1009 19:04:57.782002  161014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:04:57.790396  161014 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1009 19:04:57.790421  161014 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1009 19:04:57.790427  161014 command_runner.go:130] > /var/lib/minikube/etcd:
	I1009 19:04:57.790446  161014 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:04:57.790453  161014 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:04:57.790499  161014 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:04:57.798150  161014 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:04:57.798252  161014 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-158523" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:57.798307  161014 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-137890/kubeconfig needs updating (will repair): [kubeconfig missing "functional-158523" cluster setting kubeconfig missing "functional-158523" context setting]
	I1009 19:04:57.798648  161014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:04:57.799428  161014 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:57.799625  161014 kapi.go:59] client config for functional-158523: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:04:57.800169  161014 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:04:57.800185  161014 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:04:57.800191  161014 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:04:57.800195  161014 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:04:57.800199  161014 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:04:57.800257  161014 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:04:57.800663  161014 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:04:57.808677  161014 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:04:57.808712  161014 kubeadm.go:601] duration metric: took 18.25382ms to restartPrimaryControlPlane
	I1009 19:04:57.808720  161014 kubeadm.go:402] duration metric: took 56.001565ms to StartCluster
	I1009 19:04:57.808736  161014 settings.go:142] acquiring lock: {Name:mk34466033fb866bc8e167d2d953624ad0802283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:04:57.808837  161014 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:57.809418  161014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:04:57.809652  161014 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:04:57.809720  161014 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:04:57.809869  161014 addons.go:69] Setting storage-provisioner=true in profile "functional-158523"
	I1009 19:04:57.809882  161014 addons.go:69] Setting default-storageclass=true in profile "functional-158523"
	I1009 19:04:57.809890  161014 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:57.809907  161014 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-158523"
	I1009 19:04:57.809888  161014 addons.go:238] Setting addon storage-provisioner=true in "functional-158523"
	I1009 19:04:57.809999  161014 host.go:66] Checking if "functional-158523" exists ...
	I1009 19:04:57.810265  161014 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:04:57.810325  161014 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:04:57.815899  161014 out.go:179] * Verifying Kubernetes components...
	I1009 19:04:57.817259  161014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:04:57.830319  161014 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:57.830565  161014 kapi.go:59] client config for functional-158523: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:04:57.830893  161014 addons.go:238] Setting addon default-storageclass=true in "functional-158523"
	I1009 19:04:57.830936  161014 host.go:66] Checking if "functional-158523" exists ...
	I1009 19:04:57.831444  161014 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:04:57.831697  161014 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:04:57.833512  161014 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:57.833530  161014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:04:57.833580  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:57.856284  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:57.858504  161014 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:57.858545  161014 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:04:57.858618  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:57.879618  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:57.916522  161014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:04:57.930660  161014 node_ready.go:35] waiting up to 6m0s for node "functional-158523" to be "Ready" ...
	I1009 19:04:57.930861  161014 type.go:168] "Request Body" body=""
	I1009 19:04:57.930944  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:57.931232  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:04:57.969596  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:57.988544  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:58.026986  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.027037  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.027061  161014 retry.go:31] will retry after 164.488016ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.047051  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.047098  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.047116  161014 retry.go:31] will retry after 194.483244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
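
	Each failed kubectl apply above is followed by a retry.go line scheduling another attempt after a slightly longer, jittered delay, because the apiserver on localhost:8441 is still refusing connections. A rough sketch of that retry-with-backoff pattern, assuming a plain kubectl binary on PATH and an arbitrary attempt limit (this is not minikube's retry.go, only an illustration of the pattern visible in the log):

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyWithRetry shells out to kubectl and retries with a growing, jittered
	// delay, mirroring the "apply failed, will retry after ..." lines in the log.
	func applyWithRetry(kubectl, manifest string, attempts int) error {
		var lastErr error
		delay := 200 * time.Millisecond
		for i := 0; i < attempts; i++ {
			out, err := exec.Command(kubectl, "apply", "--force", "-f", manifest).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
			// Jitter and grow the delay, as the increasing retry intervals in the log suggest.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("apply failed, will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
		return lastErr
	}

	func main() {
		// Manifest path taken from the log; attempt count is an assumption.
		if err := applyWithRetry("kubectl", "/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
			fmt.Println("giving up:", err)
		}
	}
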
	I1009 19:04:58.192480  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:58.242329  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:58.247629  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.247684  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.247711  161014 retry.go:31] will retry after 217.861079ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.297775  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.297841  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.297866  161014 retry.go:31] will retry after 198.924996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.431075  161014 type.go:168] "Request Body" body=""
	I1009 19:04:58.431155  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:58.431537  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:04:58.466794  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:58.497509  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:58.521187  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.524476  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.524506  161014 retry.go:31] will retry after 579.961825ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.549062  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.552103  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.552134  161014 retry.go:31] will retry after 574.521259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.930944  161014 type.go:168] "Request Body" body=""
	I1009 19:04:58.931032  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:58.931452  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:04:59.104703  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:59.127368  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:59.161080  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:59.161136  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.161156  161014 retry.go:31] will retry after 734.839127ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.184025  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:59.184076  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.184098  161014 retry.go:31] will retry after 1.025268007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.431572  161014 type.go:168] "Request Body" body=""
	I1009 19:04:59.431684  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:59.432074  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:04:59.896539  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:59.931433  161014 type.go:168] "Request Body" body=""
	I1009 19:04:59.931506  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:59.931837  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:04:59.931910  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
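
	The node_ready.go lines poll GET /api/v1/nodes/functional-158523 roughly every 500ms for up to 6m0s, treating connection-refused errors as retryable until the node's Ready condition turns True. A hedged client-go equivalent of that wait loop, with the kubeconfig path assumed purely for illustration:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForNodeReady polls the node object until its Ready condition is True
	// or the timeout expires, roughly matching the 6m0s wait in the log.
	func waitForNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			} else {
				// Connection refused while the apiserver restarts lands here; keep polling.
				fmt.Println("error getting node (will retry):", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %q not Ready within %v", name, timeout)
	}

	func main() {
		// Kubeconfig path is an assumption for this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForNodeReady(cs, "functional-158523", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
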
	I1009 19:04:59.949186  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:59.952452  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.952481  161014 retry.go:31] will retry after 1.084602838s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:00.209882  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:00.262148  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:00.265292  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:00.265336  161014 retry.go:31] will retry after 1.287073207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:00.431721  161014 type.go:168] "Request Body" body=""
	I1009 19:05:00.431804  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:00.432145  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:00.931797  161014 type.go:168] "Request Body" body=""
	I1009 19:05:00.931880  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:00.932240  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:01.037525  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:01.094236  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:01.094283  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:01.094304  161014 retry.go:31] will retry after 1.546934371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:01.431777  161014 type.go:168] "Request Body" body=""
	I1009 19:05:01.431854  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:01.432251  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:01.553547  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:01.609996  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:01.610065  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:01.610089  161014 retry.go:31] will retry after 1.923829662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:01.931551  161014 type.go:168] "Request Body" body=""
	I1009 19:05:01.931629  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:01.931969  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:01.932040  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:02.431907  161014 type.go:168] "Request Body" body=""
	I1009 19:05:02.431987  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:02.432358  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:02.641614  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:02.696762  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:02.699844  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:02.699873  161014 retry.go:31] will retry after 2.36633365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:02.931226  161014 type.go:168] "Request Body" body=""
	I1009 19:05:02.931306  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:02.931737  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:03.431598  161014 type.go:168] "Request Body" body=""
	I1009 19:05:03.431682  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:03.432054  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:03.534329  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:03.590565  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:03.590611  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:03.590631  161014 retry.go:31] will retry after 1.952860092s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:03.931329  161014 type.go:168] "Request Body" body=""
	I1009 19:05:03.931427  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:03.931807  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:04.431531  161014 type.go:168] "Request Body" body=""
	I1009 19:05:04.431620  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:04.432004  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:04.432087  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:04.931914  161014 type.go:168] "Request Body" body=""
	I1009 19:05:04.931993  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:04.932341  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:05.066624  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:05.119719  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:05.123044  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:05.123086  161014 retry.go:31] will retry after 6.108852521s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:05.431602  161014 type.go:168] "Request Body" body=""
	I1009 19:05:05.431719  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:05.432158  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:05.544481  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:05.597312  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:05.600803  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:05.600837  161014 retry.go:31] will retry after 3.364758217s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:05.931296  161014 type.go:168] "Request Body" body=""
	I1009 19:05:05.931418  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:05.931808  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:06.431397  161014 type.go:168] "Request Body" body=""
	I1009 19:05:06.431479  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:06.431873  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:06.931533  161014 type.go:168] "Request Body" body=""
	I1009 19:05:06.931626  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:06.932024  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:06.932104  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:07.431687  161014 type.go:168] "Request Body" body=""
	I1009 19:05:07.431779  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:07.432140  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:07.930948  161014 type.go:168] "Request Body" body=""
	I1009 19:05:07.931031  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:07.931436  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:08.431020  161014 type.go:168] "Request Body" body=""
	I1009 19:05:08.431105  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:08.431489  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:08.931423  161014 type.go:168] "Request Body" body=""
	I1009 19:05:08.931528  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:08.931995  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:08.966195  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:09.019582  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:09.022605  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:09.022645  161014 retry.go:31] will retry after 7.771885559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:09.431187  161014 type.go:168] "Request Body" body=""
	I1009 19:05:09.431265  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:09.431662  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:09.431745  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:09.931552  161014 type.go:168] "Request Body" body=""
	I1009 19:05:09.931635  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:09.931979  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:10.431855  161014 type.go:168] "Request Body" body=""
	I1009 19:05:10.431945  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:10.432274  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:10.931110  161014 type.go:168] "Request Body" body=""
	I1009 19:05:10.931203  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:10.931572  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:11.233030  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:11.288902  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:11.288953  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:11.288975  161014 retry.go:31] will retry after 3.345246752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:11.431308  161014 type.go:168] "Request Body" body=""
	I1009 19:05:11.431402  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:11.431749  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:11.431819  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:11.931642  161014 type.go:168] "Request Body" body=""
	I1009 19:05:11.931749  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:11.932113  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:12.430947  161014 type.go:168] "Request Body" body=""
	I1009 19:05:12.431034  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:12.431445  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:12.931228  161014 type.go:168] "Request Body" body=""
	I1009 19:05:12.931315  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:12.931711  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:13.431639  161014 type.go:168] "Request Body" body=""
	I1009 19:05:13.431724  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:13.432088  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:13.432151  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:13.930962  161014 type.go:168] "Request Body" body=""
	I1009 19:05:13.931048  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:13.931440  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:14.431210  161014 type.go:168] "Request Body" body=""
	I1009 19:05:14.431296  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:14.431700  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:14.635101  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:14.689463  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:14.692943  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:14.692988  161014 retry.go:31] will retry after 8.426490786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:14.931454  161014 type.go:168] "Request Body" body=""
	I1009 19:05:14.931531  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:14.931912  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:15.431649  161014 type.go:168] "Request Body" body=""
	I1009 19:05:15.431767  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:15.432139  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:15.432244  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:15.931808  161014 type.go:168] "Request Body" body=""
	I1009 19:05:15.931885  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:15.932226  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:16.430935  161014 type.go:168] "Request Body" body=""
	I1009 19:05:16.431026  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:16.431417  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:16.794854  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:16.849041  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:16.852200  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:16.852234  161014 retry.go:31] will retry after 11.902123756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:16.931535  161014 type.go:168] "Request Body" body=""
	I1009 19:05:16.931634  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:16.931986  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:17.431870  161014 type.go:168] "Request Body" body=""
	I1009 19:05:17.431977  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:17.432410  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:17.432479  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:17.931194  161014 type.go:168] "Request Body" body=""
	I1009 19:05:17.931301  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:17.931659  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:18.431314  161014 type.go:168] "Request Body" body=""
	I1009 19:05:18.431420  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:18.431851  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:18.931802  161014 type.go:168] "Request Body" body=""
	I1009 19:05:18.931891  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:18.932247  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:19.431889  161014 type.go:168] "Request Body" body=""
	I1009 19:05:19.431978  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:19.432365  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:19.930982  161014 type.go:168] "Request Body" body=""
	I1009 19:05:19.931070  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:19.931472  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:19.931543  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:20.431080  161014 type.go:168] "Request Body" body=""
	I1009 19:05:20.431159  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:20.431505  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:20.931084  161014 type.go:168] "Request Body" body=""
	I1009 19:05:20.931162  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:20.931465  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:21.431126  161014 type.go:168] "Request Body" body=""
	I1009 19:05:21.431210  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:21.431583  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:21.931228  161014 type.go:168] "Request Body" body=""
	I1009 19:05:21.931310  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:21.931673  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:21.931757  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:22.431250  161014 type.go:168] "Request Body" body=""
	I1009 19:05:22.431335  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:22.431706  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:22.931281  161014 type.go:168] "Request Body" body=""
	I1009 19:05:22.931373  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:22.931764  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:23.120080  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:23.178288  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:23.178344  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:23.178369  161014 retry.go:31] will retry after 12.554942652s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:23.431791  161014 type.go:168] "Request Body" body=""
	I1009 19:05:23.431875  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:23.432227  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:23.931667  161014 type.go:168] "Request Body" body=""
	I1009 19:05:23.931758  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:23.932103  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:23.932167  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:24.431140  161014 type.go:168] "Request Body" body=""
	I1009 19:05:24.431223  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:24.431607  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:24.931219  161014 type.go:168] "Request Body" body=""
	I1009 19:05:24.931297  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:24.931656  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:25.431282  161014 type.go:168] "Request Body" body=""
	I1009 19:05:25.431369  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:25.431754  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:25.931371  161014 type.go:168] "Request Body" body=""
	I1009 19:05:25.931475  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:25.931852  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:26.431721  161014 type.go:168] "Request Body" body=""
	I1009 19:05:26.431805  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:26.432173  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:26.432243  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:26.931895  161014 type.go:168] "Request Body" body=""
	I1009 19:05:26.931982  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:26.932327  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:27.430978  161014 type.go:168] "Request Body" body=""
	I1009 19:05:27.431069  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:27.431440  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:27.931122  161014 type.go:168] "Request Body" body=""
	I1009 19:05:27.931203  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:27.931568  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:28.431156  161014 type.go:168] "Request Body" body=""
	I1009 19:05:28.431240  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:28.431629  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:28.755128  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:28.809181  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:28.812331  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:28.812369  161014 retry.go:31] will retry after 17.899546939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:28.931943  161014 type.go:168] "Request Body" body=""
	I1009 19:05:28.932042  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:28.932423  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:28.932495  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:29.431031  161014 type.go:168] "Request Body" body=""
	I1009 19:05:29.431117  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:29.431488  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:29.931112  161014 type.go:168] "Request Body" body=""
	I1009 19:05:29.931191  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:29.931549  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:30.431108  161014 type.go:168] "Request Body" body=""
	I1009 19:05:30.431184  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:30.431580  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:30.931234  161014 type.go:168] "Request Body" body=""
	I1009 19:05:30.931310  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:30.931701  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:31.431358  161014 type.go:168] "Request Body" body=""
	I1009 19:05:31.431464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:31.431883  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:31.431968  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:31.931551  161014 type.go:168] "Request Body" body=""
	I1009 19:05:31.931654  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:31.932150  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:32.431843  161014 type.go:168] "Request Body" body=""
	I1009 19:05:32.431928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:32.432325  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:32.930923  161014 type.go:168] "Request Body" body=""
	I1009 19:05:32.931009  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:32.931419  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:33.431049  161014 type.go:168] "Request Body" body=""
	I1009 19:05:33.431139  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:33.431539  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:33.931442  161014 type.go:168] "Request Body" body=""
	I1009 19:05:33.931529  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:33.931921  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:33.931994  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:34.431615  161014 type.go:168] "Request Body" body=""
	I1009 19:05:34.431709  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:34.432084  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:34.931772  161014 type.go:168] "Request Body" body=""
	I1009 19:05:34.931853  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:34.932239  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:35.431990  161014 type.go:168] "Request Body" body=""
	I1009 19:05:35.432083  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:35.432473  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:35.733912  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:35.787306  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:35.790843  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:35.790879  161014 retry.go:31] will retry after 31.721699669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:35.931334  161014 type.go:168] "Request Body" body=""
	I1009 19:05:35.931474  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:35.931860  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:36.431788  161014 type.go:168] "Request Body" body=""
	I1009 19:05:36.431878  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:36.432242  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:36.432309  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:36.931065  161014 type.go:168] "Request Body" body=""
	I1009 19:05:36.931156  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:36.931540  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:37.431314  161014 type.go:168] "Request Body" body=""
	I1009 19:05:37.431439  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:37.431797  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:37.931603  161014 type.go:168] "Request Body" body=""
	I1009 19:05:37.931697  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:37.932081  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:38.431682  161014 type.go:168] "Request Body" body=""
	I1009 19:05:38.431775  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:38.432127  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:38.930953  161014 type.go:168] "Request Body" body=""
	I1009 19:05:38.931049  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:38.931414  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:38.931498  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:39.430956  161014 type.go:168] "Request Body" body=""
	I1009 19:05:39.431070  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:39.431453  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:39.931034  161014 type.go:168] "Request Body" body=""
	I1009 19:05:39.931145  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:39.931490  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:40.431075  161014 type.go:168] "Request Body" body=""
	I1009 19:05:40.431166  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:40.431582  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:40.931249  161014 type.go:168] "Request Body" body=""
	I1009 19:05:40.931336  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:40.931693  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:40.931756  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:41.431331  161014 type.go:168] "Request Body" body=""
	I1009 19:05:41.431437  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:41.431805  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:41.931445  161014 type.go:168] "Request Body" body=""
	I1009 19:05:41.931535  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:41.931928  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:42.431625  161014 type.go:168] "Request Body" body=""
	I1009 19:05:42.431705  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:42.432064  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:42.931717  161014 type.go:168] "Request Body" body=""
	I1009 19:05:42.931803  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:42.932175  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:42.932247  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:43.430857  161014 type.go:168] "Request Body" body=""
	I1009 19:05:43.430971  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:43.431317  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:43.931149  161014 type.go:168] "Request Body" body=""
	I1009 19:05:43.931232  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:43.931588  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:44.431181  161014 type.go:168] "Request Body" body=""
	I1009 19:05:44.431266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:44.431639  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:44.931222  161014 type.go:168] "Request Body" body=""
	I1009 19:05:44.931306  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:44.931692  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:45.431277  161014 type.go:168] "Request Body" body=""
	I1009 19:05:45.431360  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:45.431736  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:45.431802  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:45.931357  161014 type.go:168] "Request Body" body=""
	I1009 19:05:45.931462  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:45.931838  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:46.431506  161014 type.go:168] "Request Body" body=""
	I1009 19:05:46.431593  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:46.431956  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:46.712449  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:46.768626  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:46.768679  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:46.768704  161014 retry.go:31] will retry after 25.41172348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:46.930938  161014 type.go:168] "Request Body" body=""
	I1009 19:05:46.931055  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:46.931460  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:47.431076  161014 type.go:168] "Request Body" body=""
	I1009 19:05:47.431153  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:47.431556  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:47.931415  161014 type.go:168] "Request Body" body=""
	I1009 19:05:47.931510  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:47.931879  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:47.931959  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:48.431674  161014 type.go:168] "Request Body" body=""
	I1009 19:05:48.431759  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:48.432094  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:48.930896  161014 type.go:168] "Request Body" body=""
	I1009 19:05:48.931001  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:48.931373  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:49.430996  161014 type.go:168] "Request Body" body=""
	I1009 19:05:49.431076  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:49.431465  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:49.931262  161014 type.go:168] "Request Body" body=""
	I1009 19:05:49.931370  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:49.931789  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:50.431699  161014 type.go:168] "Request Body" body=""
	I1009 19:05:50.431782  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:50.432126  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:50.432204  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:50.930957  161014 type.go:168] "Request Body" body=""
	I1009 19:05:50.931084  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:50.931482  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:51.431250  161014 type.go:168] "Request Body" body=""
	I1009 19:05:51.431347  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:51.431706  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:51.931601  161014 type.go:168] "Request Body" body=""
	I1009 19:05:51.931698  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:51.932063  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:52.430862  161014 type.go:168] "Request Body" body=""
	I1009 19:05:52.430953  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:52.431298  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:52.931085  161014 type.go:168] "Request Body" body=""
	I1009 19:05:52.931179  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:52.931557  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:52.931624  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:53.431339  161014 type.go:168] "Request Body" body=""
	I1009 19:05:53.431459  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:53.431829  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:53.931637  161014 type.go:168] "Request Body" body=""
	I1009 19:05:53.931736  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:53.932120  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:54.430920  161014 type.go:168] "Request Body" body=""
	I1009 19:05:54.431014  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:54.431426  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:54.931205  161014 type.go:168] "Request Body" body=""
	I1009 19:05:54.931299  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:54.931695  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:54.931776  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:55.431596  161014 type.go:168] "Request Body" body=""
	I1009 19:05:55.431674  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:55.432023  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:55.931858  161014 type.go:168] "Request Body" body=""
	I1009 19:05:55.931949  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:55.932317  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:56.431017  161014 type.go:168] "Request Body" body=""
	I1009 19:05:56.431102  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:56.431477  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:56.931242  161014 type.go:168] "Request Body" body=""
	I1009 19:05:56.931328  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:56.931740  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:56.931822  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:57.431701  161014 type.go:168] "Request Body" body=""
	I1009 19:05:57.431787  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:57.432169  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:57.931004  161014 type.go:168] "Request Body" body=""
	I1009 19:05:57.931088  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:57.931492  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:58.430896  161014 type.go:168] "Request Body" body=""
	I1009 19:05:58.430977  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:58.431316  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:58.931136  161014 type.go:168] "Request Body" body=""
	I1009 19:05:58.931305  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:58.931701  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:59.431527  161014 type.go:168] "Request Body" body=""
	I1009 19:05:59.431619  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:59.431986  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:59.432056  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:59.931914  161014 type.go:168] "Request Body" body=""
	I1009 19:05:59.932022  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:59.932451  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:00.431235  161014 type.go:168] "Request Body" body=""
	I1009 19:06:00.431321  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:00.431700  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:00.931491  161014 type.go:168] "Request Body" body=""
	I1009 19:06:00.931598  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:00.932038  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:01.430870  161014 type.go:168] "Request Body" body=""
	I1009 19:06:01.430962  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:01.431351  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:01.931160  161014 type.go:168] "Request Body" body=""
	I1009 19:06:01.931259  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:01.931701  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:01.931781  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:02.431642  161014 type.go:168] "Request Body" body=""
	I1009 19:06:02.431741  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:02.432105  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:02.930912  161014 type.go:168] "Request Body" body=""
	I1009 19:06:02.931026  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:02.931440  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:03.431233  161014 type.go:168] "Request Body" body=""
	I1009 19:06:03.431316  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:03.431698  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:03.931548  161014 type.go:168] "Request Body" body=""
	I1009 19:06:03.931627  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:03.932000  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:03.932085  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:04.431884  161014 type.go:168] "Request Body" body=""
	I1009 19:06:04.431963  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:04.432329  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:04.931167  161014 type.go:168] "Request Body" body=""
	I1009 19:06:04.931256  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:04.931675  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:05.431519  161014 type.go:168] "Request Body" body=""
	I1009 19:06:05.431593  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:05.431983  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:05.931927  161014 type.go:168] "Request Body" body=""
	I1009 19:06:05.932019  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:05.932421  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:05.932517  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:06.431278  161014 type.go:168] "Request Body" body=""
	I1009 19:06:06.431359  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:06.431798  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:06.931667  161014 type.go:168] "Request Body" body=""
	I1009 19:06:06.931753  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:06.932149  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:07.430942  161014 type.go:168] "Request Body" body=""
	I1009 19:06:07.431028  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:07.431419  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:07.513672  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:06:07.571073  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:07.571125  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:06:07.571145  161014 retry.go:31] will retry after 23.39838606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
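	[editor's note] The apply above fails before the manifest is even validated: kubectl cannot download the OpenAPI schema from localhost:8441, so minikube logs the failure and schedules a retry (23.4 s here). The following is a hedged Go sketch of that retry-around-kubectl pattern. The kubectl path, the KUBECONFIG location and the manifest path are taken from the log; the attempt cap and the randomized delay are assumptions for illustration and are not minikube's actual retry.go policy.

	package main

	import (
		"fmt"
		"math/rand"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		// Re-run `kubectl apply` until it succeeds or attempts run out.
		kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
		manifest := "/etc/kubernetes/addons/storage-provisioner.yaml"
		for attempt := 1; attempt <= 5; attempt++ {
			cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
			cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
			out, err := cmd.CombinedOutput()
			if err == nil {
				fmt.Printf("applied on attempt %d\n%s", attempt, out)
				return
			}
			// Assumed 10-30s randomized backoff, standing in for the
			// "will retry after ..." delays seen in the log.
			delay := time.Duration(10+rand.Intn(20)) * time.Second
			fmt.Printf("apply failed (%v), will retry after %s\n%s", err, delay, out)
			time.Sleep(delay)
		}
		fmt.Println("giving up: apply kept failing")
	}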
	I1009 19:06:07.931687  161014 type.go:168] "Request Body" body=""
	I1009 19:06:07.931784  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:07.932135  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:08.430924  161014 type.go:168] "Request Body" body=""
	I1009 19:06:08.431034  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:08.431403  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:08.431469  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:08.931208  161014 type.go:168] "Request Body" body=""
	I1009 19:06:08.931288  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:08.931643  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:09.431520  161014 type.go:168] "Request Body" body=""
	I1009 19:06:09.431629  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:09.432018  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:09.931868  161014 type.go:168] "Request Body" body=""
	I1009 19:06:09.931945  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:09.932304  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:10.431151  161014 type.go:168] "Request Body" body=""
	I1009 19:06:10.431248  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:10.431669  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:10.431738  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:10.931500  161014 type.go:168] "Request Body" body=""
	I1009 19:06:10.931584  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:10.931948  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:11.431952  161014 type.go:168] "Request Body" body=""
	I1009 19:06:11.432052  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:11.432455  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:11.931228  161014 type.go:168] "Request Body" body=""
	I1009 19:06:11.931310  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:11.931689  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:12.181131  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:06:12.238294  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:12.238358  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:06:12.238405  161014 retry.go:31] will retry after 21.481583015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:06:12.431681  161014 type.go:168] "Request Body" body=""
	I1009 19:06:12.431761  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:12.432057  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:12.432128  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:12.931845  161014 type.go:168] "Request Body" body=""
	I1009 19:06:12.931939  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:12.932415  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:13.431004  161014 type.go:168] "Request Body" body=""
	I1009 19:06:13.431117  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:13.431483  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:13.931308  161014 type.go:168] "Request Body" body=""
	I1009 19:06:13.931404  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:13.931807  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:14.431415  161014 type.go:168] "Request Body" body=""
	I1009 19:06:14.431502  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:14.431906  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:14.931635  161014 type.go:168] "Request Body" body=""
	I1009 19:06:14.931725  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:14.932138  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:14.932205  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:15.431840  161014 type.go:168] "Request Body" body=""
	I1009 19:06:15.431927  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:15.432292  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:15.930896  161014 type.go:168] "Request Body" body=""
	I1009 19:06:15.930996  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:15.931404  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:16.431000  161014 type.go:168] "Request Body" body=""
	I1009 19:06:16.431088  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:16.431482  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:16.931089  161014 type.go:168] "Request Body" body=""
	I1009 19:06:16.931169  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:16.931606  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:17.431187  161014 type.go:168] "Request Body" body=""
	I1009 19:06:17.431266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:17.431645  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:17.431717  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:17.931505  161014 type.go:168] "Request Body" body=""
	I1009 19:06:17.931588  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:17.931977  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:18.431663  161014 type.go:168] "Request Body" body=""
	I1009 19:06:18.431753  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:18.432133  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:18.931039  161014 type.go:168] "Request Body" body=""
	I1009 19:06:18.931125  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:18.931496  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:19.431026  161014 type.go:168] "Request Body" body=""
	I1009 19:06:19.431101  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:19.431425  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:19.931079  161014 type.go:168] "Request Body" body=""
	I1009 19:06:19.931160  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:19.931533  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:19.931605  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:20.431142  161014 type.go:168] "Request Body" body=""
	I1009 19:06:20.431225  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:20.431606  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:20.931205  161014 type.go:168] "Request Body" body=""
	I1009 19:06:20.931288  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:20.931685  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:21.431270  161014 type.go:168] "Request Body" body=""
	I1009 19:06:21.431352  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:21.431727  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:21.931351  161014 type.go:168] "Request Body" body=""
	I1009 19:06:21.931459  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:21.931867  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:21.931960  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:22.431630  161014 type.go:168] "Request Body" body=""
	I1009 19:06:22.431720  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:22.432112  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:22.931909  161014 type.go:168] "Request Body" body=""
	I1009 19:06:22.932006  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:22.932466  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:23.431019  161014 type.go:168] "Request Body" body=""
	I1009 19:06:23.431108  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:23.431500  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:23.931352  161014 type.go:168] "Request Body" body=""
	I1009 19:06:23.931464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:23.931866  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:24.430863  161014 type.go:168] "Request Body" body=""
	I1009 19:06:24.430951  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:24.431355  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:24.431478  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:24.930971  161014 type.go:168] "Request Body" body=""
	I1009 19:06:24.931061  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:24.931476  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:25.431052  161014 type.go:168] "Request Body" body=""
	I1009 19:06:25.431143  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:25.431497  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:25.931072  161014 type.go:168] "Request Body" body=""
	I1009 19:06:25.931164  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:25.931594  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:26.430916  161014 type.go:168] "Request Body" body=""
	I1009 19:06:26.431010  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:26.431407  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:26.931057  161014 type.go:168] "Request Body" body=""
	I1009 19:06:26.931135  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:26.931533  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:26.931610  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:27.431142  161014 type.go:168] "Request Body" body=""
	I1009 19:06:27.431220  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:27.431622  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:27.931665  161014 type.go:168] "Request Body" body=""
	I1009 19:06:27.931758  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:27.932163  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:28.431861  161014 type.go:168] "Request Body" body=""
	I1009 19:06:28.431949  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:28.432310  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:28.931285  161014 type.go:168] "Request Body" body=""
	I1009 19:06:28.931362  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:28.931821  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:28.931892  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:29.431462  161014 type.go:168] "Request Body" body=""
	I1009 19:06:29.431547  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:29.432004  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:29.931701  161014 type.go:168] "Request Body" body=""
	I1009 19:06:29.931782  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:29.932180  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:30.431935  161014 type.go:168] "Request Body" body=""
	I1009 19:06:30.432026  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:30.432405  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:30.931026  161014 type.go:168] "Request Body" body=""
	I1009 19:06:30.931109  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:30.931522  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:30.970755  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:06:31.028107  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:31.028174  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:31.028309  161014 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:06:31.431764  161014 type.go:168] "Request Body" body=""
	I1009 19:06:31.431853  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:31.432208  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:31.432284  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:31.930867  161014 type.go:168] "Request Body" body=""
	I1009 19:06:31.930984  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:31.931336  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:32.430958  161014 type.go:168] "Request Body" body=""
	I1009 19:06:32.431047  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:32.431465  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:32.931031  161014 type.go:168] "Request Body" body=""
	I1009 19:06:32.931127  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:32.931496  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:33.431116  161014 type.go:168] "Request Body" body=""
	I1009 19:06:33.431195  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:33.431601  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:33.721082  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:06:33.781514  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:33.781597  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:33.781723  161014 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:06:33.784570  161014 out.go:179] * Enabled addons: 
	I1009 19:06:33.786444  161014 addons.go:514] duration metric: took 1m35.976729521s for enable addons: enabled=[]
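	[editor's note] At this point addon enabling has given up (enabled=[]) while the readiness poll keeps failing. Both failure modes dial port 8441, once on the node IP and once on localhost, and both are refused, which points at kube-apiserver not serving on that port rather than at the addon manifests themselves. A small connectivity probe one could run to confirm that is sketched below; the two addresses come straight from the log, everything else is illustrative.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Probe the two endpoints the log dials: the node IP used by the
		// readiness poll and the localhost address kubectl uses for its
		// OpenAPI download. In the log both refuse connections.
		for _, addr := range []string{"192.168.49.2:8441", "127.0.0.1:8441"} {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err != nil {
				fmt.Printf("%s: %v\n", addr, err)
				continue
			}
			conn.Close()
			fmt.Printf("%s: reachable\n", addr)
		}
	}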
	I1009 19:06:33.931193  161014 type.go:168] "Request Body" body=""
	I1009 19:06:33.931298  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:33.931708  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:33.931785  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:34.431608  161014 type.go:168] "Request Body" body=""
	I1009 19:06:34.431688  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:34.432050  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:34.931894  161014 type.go:168] "Request Body" body=""
	I1009 19:06:34.931982  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:34.932369  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:35.431177  161014 type.go:168] "Request Body" body=""
	I1009 19:06:35.431261  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:35.431656  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:35.931508  161014 type.go:168] "Request Body" body=""
	I1009 19:06:35.931632  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:35.932017  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:35.932080  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:36.431933  161014 type.go:168] "Request Body" body=""
	I1009 19:06:36.432042  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:36.432446  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:36.931225  161014 type.go:168] "Request Body" body=""
	I1009 19:06:36.931328  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:36.931704  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:37.431625  161014 type.go:168] "Request Body" body=""
	I1009 19:06:37.431738  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:37.432141  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:37.930857  161014 type.go:168] "Request Body" body=""
	I1009 19:06:37.930995  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:37.931342  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:38.431133  161014 type.go:168] "Request Body" body=""
	I1009 19:06:38.431214  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:38.431597  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:38.431683  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:38.931462  161014 type.go:168] "Request Body" body=""
	I1009 19:06:38.931563  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:38.931971  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:39.431871  161014 type.go:168] "Request Body" body=""
	I1009 19:06:39.431963  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:39.432315  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:39.931128  161014 type.go:168] "Request Body" body=""
	I1009 19:06:39.931220  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:39.931618  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:40.431437  161014 type.go:168] "Request Body" body=""
	I1009 19:06:40.431514  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:40.431890  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:40.431961  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:40.931810  161014 type.go:168] "Request Body" body=""
	I1009 19:06:40.931912  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:40.932290  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:41.431100  161014 type.go:168] "Request Body" body=""
	I1009 19:06:41.431218  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:41.431599  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:41.931346  161014 type.go:168] "Request Body" body=""
	I1009 19:06:41.931468  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:41.931837  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:42.431757  161014 type.go:168] "Request Body" body=""
	I1009 19:06:42.431845  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:42.432237  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:42.432298  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:42.931019  161014 type.go:168] "Request Body" body=""
	I1009 19:06:42.931113  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:42.931521  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:43.431303  161014 type.go:168] "Request Body" body=""
	I1009 19:06:43.431415  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:43.431782  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:43.931780  161014 type.go:168] "Request Body" body=""
	I1009 19:06:43.931864  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:43.932272  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:44.431107  161014 type.go:168] "Request Body" body=""
	I1009 19:06:44.431212  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:44.431609  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:44.931522  161014 type.go:168] "Request Body" body=""
	I1009 19:06:44.931612  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:44.932005  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:44.932091  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:45.430863  161014 type.go:168] "Request Body" body=""
	I1009 19:06:45.430955  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:45.431333  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:45.931195  161014 type.go:168] "Request Body" body=""
	I1009 19:06:45.931296  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:45.931727  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:46.431598  161014 type.go:168] "Request Body" body=""
	I1009 19:06:46.431682  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:46.432089  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:46.930909  161014 type.go:168] "Request Body" body=""
	I1009 19:06:46.931014  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:46.931410  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:47.431166  161014 type.go:168] "Request Body" body=""
	I1009 19:06:47.431244  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:47.431610  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:47.431679  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:47.931409  161014 type.go:168] "Request Body" body=""
	I1009 19:06:47.931495  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:47.931852  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:48.431707  161014 type.go:168] "Request Body" body=""
	I1009 19:06:48.431803  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:48.432224  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:48.931113  161014 type.go:168] "Request Body" body=""
	I1009 19:06:48.931196  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:48.931590  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:49.431438  161014 type.go:168] "Request Body" body=""
	I1009 19:06:49.431532  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:49.431933  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:49.432014  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:49.931847  161014 type.go:168] "Request Body" body=""
	I1009 19:06:49.931955  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:49.932365  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:50.431151  161014 type.go:168] "Request Body" body=""
	I1009 19:06:50.431252  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:50.431731  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:50.931584  161014 type.go:168] "Request Body" body=""
	I1009 19:06:50.931668  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:50.932034  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:51.431892  161014 type.go:168] "Request Body" body=""
	I1009 19:06:51.432001  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:51.432357  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:51.432451  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:51.931169  161014 type.go:168] "Request Body" body=""
	I1009 19:06:51.931251  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:51.931649  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:52.431585  161014 type.go:168] "Request Body" body=""
	I1009 19:06:52.431683  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:52.432058  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:52.931903  161014 type.go:168] "Request Body" body=""
	I1009 19:06:52.931994  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:52.932365  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:53.431140  161014 type.go:168] "Request Body" body=""
	I1009 19:06:53.431240  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:53.431607  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:53.931515  161014 type.go:168] "Request Body" body=""
	I1009 19:06:53.931602  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:53.931970  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:53.932045  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:54.431874  161014 type.go:168] "Request Body" body=""
	I1009 19:06:54.431956  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:54.432333  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:54.931110  161014 type.go:168] "Request Body" body=""
	I1009 19:06:54.931191  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:54.931572  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:55.431313  161014 type.go:168] "Request Body" body=""
	I1009 19:06:55.431422  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:55.431759  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:55.931628  161014 type.go:168] "Request Body" body=""
	I1009 19:06:55.931708  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:55.932052  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:55.932122  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:56.430861  161014 type.go:168] "Request Body" body=""
	I1009 19:06:56.430953  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:56.431299  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:56.931073  161014 type.go:168] "Request Body" body=""
	I1009 19:06:56.931162  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:56.931537  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:57.431318  161014 type.go:168] "Request Body" body=""
	I1009 19:06:57.431417  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:57.431759  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:57.931618  161014 type.go:168] "Request Body" body=""
	I1009 19:06:57.931839  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:57.932218  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:57.932279  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:58.431144  161014 type.go:168] "Request Body" body=""
	I1009 19:06:58.431334  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:58.431970  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:58.931861  161014 type.go:168] "Request Body" body=""
	I1009 19:06:58.931940  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:58.932311  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:59.431143  161014 type.go:168] "Request Body" body=""
	I1009 19:06:59.431223  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:59.431592  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:59.930909  161014 type.go:168] "Request Body" body=""
	I1009 19:06:59.931020  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:59.931371  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:00.430999  161014 type.go:168] "Request Body" body=""
	I1009 19:07:00.431081  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:00.431500  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:00.431566  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:00.931093  161014 type.go:168] "Request Body" body=""
	I1009 19:07:00.931180  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:00.931581  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:01.431360  161014 type.go:168] "Request Body" body=""
	I1009 19:07:01.431464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:01.431832  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:01.931692  161014 type.go:168] "Request Body" body=""
	I1009 19:07:01.931784  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:01.932184  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:02.430934  161014 type.go:168] "Request Body" body=""
	I1009 19:07:02.431018  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:02.431378  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:02.931191  161014 type.go:168] "Request Body" body=""
	I1009 19:07:02.931275  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:02.931689  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:02.931756  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:03.431523  161014 type.go:168] "Request Body" body=""
	I1009 19:07:03.431604  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:03.431991  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:03.930871  161014 type.go:168] "Request Body" body=""
	I1009 19:07:03.930969  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:03.931407  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:04.431200  161014 type.go:168] "Request Body" body=""
	I1009 19:07:04.431281  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:04.431686  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:04.931603  161014 type.go:168] "Request Body" body=""
	I1009 19:07:04.931684  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:04.932085  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:04.932154  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:05.430888  161014 type.go:168] "Request Body" body=""
	I1009 19:07:05.430980  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:05.431365  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:05.931176  161014 type.go:168] "Request Body" body=""
	I1009 19:07:05.931266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:05.931718  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:06.431607  161014 type.go:168] "Request Body" body=""
	I1009 19:07:06.431688  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:06.432075  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:06.930900  161014 type.go:168] "Request Body" body=""
	I1009 19:07:06.931004  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:06.931420  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:07.431211  161014 type.go:168] "Request Body" body=""
	I1009 19:07:07.431297  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:07.431674  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:07.431738  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:07.931521  161014 type.go:168] "Request Body" body=""
	I1009 19:07:07.931600  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:07.931988  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:08.431938  161014 type.go:168] "Request Body" body=""
	I1009 19:07:08.432023  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:08.432368  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:08.931198  161014 type.go:168] "Request Body" body=""
	I1009 19:07:08.931276  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:08.931670  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:09.431634  161014 type.go:168] "Request Body" body=""
	I1009 19:07:09.431764  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:09.432193  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:09.432271  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:09.931021  161014 type.go:168] "Request Body" body=""
	I1009 19:07:09.931112  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:09.931511  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:10.431319  161014 type.go:168] "Request Body" body=""
	I1009 19:07:10.431421  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:10.431776  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:10.931586  161014 type.go:168] "Request Body" body=""
	I1009 19:07:10.931675  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:10.932028  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:11.431928  161014 type.go:168] "Request Body" body=""
	I1009 19:07:11.432018  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:11.432409  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:11.432493  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:11.931228  161014 type.go:168] "Request Body" body=""
	I1009 19:07:11.931314  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:11.931691  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:12.431493  161014 type.go:168] "Request Body" body=""
	I1009 19:07:12.431576  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:12.431970  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:12.931830  161014 type.go:168] "Request Body" body=""
	I1009 19:07:12.931910  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:12.932268  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:13.431040  161014 type.go:168] "Request Body" body=""
	I1009 19:07:13.431128  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:13.431515  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:13.931313  161014 type.go:168] "Request Body" body=""
	I1009 19:07:13.931411  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:13.931829  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:13.931895  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:14.431732  161014 type.go:168] "Request Body" body=""
	I1009 19:07:14.431830  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:14.432198  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:14.931016  161014 type.go:168] "Request Body" body=""
	I1009 19:07:14.931107  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:14.931472  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:15.431233  161014 type.go:168] "Request Body" body=""
	I1009 19:07:15.431326  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:15.431754  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:15.931605  161014 type.go:168] "Request Body" body=""
	I1009 19:07:15.931691  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:15.932042  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:15.932112  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:16.430847  161014 type.go:168] "Request Body" body=""
	I1009 19:07:16.430926  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:16.431288  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:16.931059  161014 type.go:168] "Request Body" body=""
	I1009 19:07:16.931135  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:16.931483  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:17.431236  161014 type.go:168] "Request Body" body=""
	I1009 19:07:17.431328  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:17.431725  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:17.931584  161014 type.go:168] "Request Body" body=""
	I1009 19:07:17.931680  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:17.932068  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:17.932144  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:18.430878  161014 type.go:168] "Request Body" body=""
	I1009 19:07:18.430959  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:18.431336  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:18.931220  161014 type.go:168] "Request Body" body=""
	I1009 19:07:18.931308  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:18.931716  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:19.431622  161014 type.go:168] "Request Body" body=""
	I1009 19:07:19.431711  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:19.432084  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:19.930887  161014 type.go:168] "Request Body" body=""
	I1009 19:07:19.930970  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:19.931335  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:20.431128  161014 type.go:168] "Request Body" body=""
	I1009 19:07:20.431228  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:20.431607  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:20.431677  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:20.931571  161014 type.go:168] "Request Body" body=""
	I1009 19:07:20.931652  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:20.932025  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:21.431914  161014 type.go:168] "Request Body" body=""
	I1009 19:07:21.432004  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:21.432437  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:21.931260  161014 type.go:168] "Request Body" body=""
	I1009 19:07:21.931349  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:21.931776  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:22.431637  161014 type.go:168] "Request Body" body=""
	I1009 19:07:22.431729  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:22.432091  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:22.432158  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:22.930926  161014 type.go:168] "Request Body" body=""
	I1009 19:07:22.931021  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:22.931412  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:23.431182  161014 type.go:168] "Request Body" body=""
	I1009 19:07:23.431266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:23.431631  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:23.931458  161014 type.go:168] "Request Body" body=""
	I1009 19:07:23.931550  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:23.931920  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:24.431853  161014 type.go:168] "Request Body" body=""
	I1009 19:07:24.431948  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:24.432326  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:24.432422  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:24.931143  161014 type.go:168] "Request Body" body=""
	I1009 19:07:24.931223  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:24.931599  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:25.431358  161014 type.go:168] "Request Body" body=""
	I1009 19:07:25.431464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:25.431821  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:25.931703  161014 type.go:168] "Request Body" body=""
	I1009 19:07:25.931787  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:25.932180  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:26.430976  161014 type.go:168] "Request Body" body=""
	I1009 19:07:26.431075  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:26.431458  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:26.931245  161014 type.go:168] "Request Body" body=""
	I1009 19:07:26.931331  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:26.931713  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:26.931784  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:27.431576  161014 type.go:168] "Request Body" body=""
	I1009 19:07:27.431668  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:27.432031  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:27.931772  161014 type.go:168] "Request Body" body=""
	I1009 19:07:27.931862  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:27.932254  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:28.431022  161014 type.go:168] "Request Body" body=""
	I1009 19:07:28.431102  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:28.431482  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:28.931348  161014 type.go:168] "Request Body" body=""
	I1009 19:07:28.931459  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:28.931844  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:28.931911  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:29.431781  161014 type.go:168] "Request Body" body=""
	I1009 19:07:29.431865  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:29.432226  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:29.931019  161014 type.go:168] "Request Body" body=""
	I1009 19:07:29.931099  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:29.931495  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:30.431235  161014 type.go:168] "Request Body" body=""
	I1009 19:07:30.431318  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:30.431699  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:30.931618  161014 type.go:168] "Request Body" body=""
	I1009 19:07:30.931726  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:30.932096  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:30.932155  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:31.430950  161014 type.go:168] "Request Body" body=""
	I1009 19:07:31.431039  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:31.431429  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:31.931226  161014 type.go:168] "Request Body" body=""
	I1009 19:07:31.931324  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:31.931743  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:32.431688  161014 type.go:168] "Request Body" body=""
	I1009 19:07:32.431781  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:32.432184  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:32.930987  161014 type.go:168] "Request Body" body=""
	I1009 19:07:32.931070  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:32.931473  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:33.431239  161014 type.go:168] "Request Body" body=""
	I1009 19:07:33.431321  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:33.431722  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:33.431792  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:33.931516  161014 type.go:168] "Request Body" body=""
	I1009 19:07:33.931606  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:33.931963  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:34.431848  161014 type.go:168] "Request Body" body=""
	I1009 19:07:34.431929  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:34.432337  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:34.931149  161014 type.go:168] "Request Body" body=""
	I1009 19:07:34.931233  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:34.931610  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:35.431433  161014 type.go:168] "Request Body" body=""
	I1009 19:07:35.431519  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:35.431884  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:35.431951  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:35.931757  161014 type.go:168] "Request Body" body=""
	I1009 19:07:35.931834  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:35.932194  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:36.431002  161014 type.go:168] "Request Body" body=""
	I1009 19:07:36.431092  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:36.431521  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:36.931304  161014 type.go:168] "Request Body" body=""
	I1009 19:07:36.931415  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:36.931771  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:37.431635  161014 type.go:168] "Request Body" body=""
	I1009 19:07:37.431735  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:37.432135  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:37.432203  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:37.931637  161014 type.go:168] "Request Body" body=""
	I1009 19:07:37.931755  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:37.932124  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:38.430922  161014 type.go:168] "Request Body" body=""
	I1009 19:07:38.431020  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:38.431405  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:38.931217  161014 type.go:168] "Request Body" body=""
	I1009 19:07:38.931295  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:38.931651  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:39.431495  161014 type.go:168] "Request Body" body=""
	I1009 19:07:39.431575  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:39.431936  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:39.931858  161014 type.go:168] "Request Body" body=""
	I1009 19:07:39.931940  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:39.932326  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:39.932421  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:40.431161  161014 type.go:168] "Request Body" body=""
	I1009 19:07:40.431255  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:40.431615  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:40.931366  161014 type.go:168] "Request Body" body=""
	I1009 19:07:40.931491  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:40.931869  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:41.431767  161014 type.go:168] "Request Body" body=""
	I1009 19:07:41.431861  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:41.432350  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:41.931160  161014 type.go:168] "Request Body" body=""
	I1009 19:07:41.931250  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:41.931735  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:42.431633  161014 type.go:168] "Request Body" body=""
	I1009 19:07:42.431732  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:42.432111  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:42.432176  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:42.930929  161014 type.go:168] "Request Body" body=""
	I1009 19:07:42.931031  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:42.931442  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:43.431234  161014 type.go:168] "Request Body" body=""
	I1009 19:07:43.431336  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:43.431722  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:43.931601  161014 type.go:168] "Request Body" body=""
	I1009 19:07:43.931683  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:43.932053  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:44.430865  161014 type.go:168] "Request Body" body=""
	I1009 19:07:44.430947  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:44.431356  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:44.931167  161014 type.go:168] "Request Body" body=""
	I1009 19:07:44.931250  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:44.931627  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:44.931696  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:45.431431  161014 type.go:168] "Request Body" body=""
	I1009 19:07:45.431510  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:45.431847  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:45.931770  161014 type.go:168] "Request Body" body=""
	I1009 19:07:45.931853  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:45.932210  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:46.430939  161014 type.go:168] "Request Body" body=""
	I1009 19:07:46.431018  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:46.431347  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:46.931133  161014 type.go:168] "Request Body" body=""
	I1009 19:07:46.931213  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:46.931599  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:47.431337  161014 type.go:168] "Request Body" body=""
	I1009 19:07:47.431449  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:47.431806  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:47.431876  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:47.931598  161014 type.go:168] "Request Body" body=""
	I1009 19:07:47.931682  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:47.932028  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:48.431835  161014 type.go:168] "Request Body" body=""
	I1009 19:07:48.431919  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:48.432273  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:48.931089  161014 type.go:168] "Request Body" body=""
	I1009 19:07:48.931179  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:48.931527  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:49.431272  161014 type.go:168] "Request Body" body=""
	I1009 19:07:49.431350  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:49.431715  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:49.931579  161014 type.go:168] "Request Body" body=""
	I1009 19:07:49.931664  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:49.932040  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:49.932107  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:50.431582  161014 type.go:168] "Request Body" body=""
	I1009 19:07:50.431662  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:50.432003  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:50.931872  161014 type.go:168] "Request Body" body=""
	I1009 19:07:50.931951  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:50.932322  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:51.431016  161014 type.go:168] "Request Body" body=""
	I1009 19:07:51.431095  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:51.431478  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:51.931270  161014 type.go:168] "Request Body" body=""
	I1009 19:07:51.931349  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:51.931734  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:52.431662  161014 type.go:168] "Request Body" body=""
	I1009 19:07:52.431743  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:52.432165  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:52.432255  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:52.931027  161014 type.go:168] "Request Body" body=""
	I1009 19:07:52.931111  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:52.931524  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:53.431299  161014 type.go:168] "Request Body" body=""
	I1009 19:07:53.431409  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:53.431777  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:53.931692  161014 type.go:168] "Request Body" body=""
	I1009 19:07:53.931802  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:53.932188  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:54.431033  161014 type.go:168] "Request Body" body=""
	I1009 19:07:54.431116  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:54.431506  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:54.931262  161014 type.go:168] "Request Body" body=""
	I1009 19:07:54.931371  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:54.931818  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:54.931896  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:55.431748  161014 type.go:168] "Request Body" body=""
	I1009 19:07:55.431839  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:55.432227  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:55.931001  161014 type.go:168] "Request Body" body=""
	I1009 19:07:55.931091  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:55.931464  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:56.431257  161014 type.go:168] "Request Body" body=""
	I1009 19:07:56.431342  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:56.431727  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:56.931602  161014 type.go:168] "Request Body" body=""
	I1009 19:07:56.931701  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:56.932081  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:56.932152  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:57.430910  161014 type.go:168] "Request Body" body=""
	I1009 19:07:57.431002  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:57.431362  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:57.931308  161014 type.go:168] "Request Body" body=""
	I1009 19:07:57.931413  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:57.931773  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:58.431643  161014 type.go:168] "Request Body" body=""
	I1009 19:07:58.431802  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:58.432134  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:58.931081  161014 type.go:168] "Request Body" body=""
	I1009 19:07:58.931169  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:58.931540  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:59.431310  161014 type.go:168] "Request Body" body=""
	I1009 19:07:59.431416  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:59.431835  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:59.431910  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:59.931738  161014 type.go:168] "Request Body" body=""
	I1009 19:07:59.931826  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:59.932198  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:00.430977  161014 type.go:168] "Request Body" body=""
	I1009 19:08:00.431073  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:00.431459  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:00.931240  161014 type.go:168] "Request Body" body=""
	I1009 19:08:00.931327  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:00.931726  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:01.431608  161014 type.go:168] "Request Body" body=""
	I1009 19:08:01.431703  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:01.432081  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:01.432155  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:01.930901  161014 type.go:168] "Request Body" body=""
	I1009 19:08:01.930998  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:01.931353  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:02.431155  161014 type.go:168] "Request Body" body=""
	I1009 19:08:02.431246  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:02.431683  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:02.931507  161014 type.go:168] "Request Body" body=""
	I1009 19:08:02.931648  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:02.932004  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:03.431604  161014 type.go:168] "Request Body" body=""
	I1009 19:08:03.431682  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:03.432043  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:03.930851  161014 type.go:168] "Request Body" body=""
	I1009 19:08:03.930932  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:03.931328  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:03.931434  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:04.431148  161014 type.go:168] "Request Body" body=""
	I1009 19:08:04.431226  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:04.431671  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:04.931497  161014 type.go:168] "Request Body" body=""
	I1009 19:08:04.931576  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:04.931933  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:05.431818  161014 type.go:168] "Request Body" body=""
	I1009 19:08:05.431913  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:05.432337  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:05.931111  161014 type.go:168] "Request Body" body=""
	I1009 19:08:05.931188  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:05.931598  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:05.931665  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:06.431433  161014 type.go:168] "Request Body" body=""
	I1009 19:08:06.431518  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:06.431897  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:06.931739  161014 type.go:168] "Request Body" body=""
	I1009 19:08:06.931825  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:06.932190  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:07.431010  161014 type.go:168] "Request Body" body=""
	I1009 19:08:07.431098  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:07.431492  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:07.931321  161014 type.go:168] "Request Body" body=""
	I1009 19:08:07.931478  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:07.931847  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:07.931911  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:08.431736  161014 type.go:168] "Request Body" body=""
	I1009 19:08:08.431826  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:08.432199  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:08.931147  161014 type.go:168] "Request Body" body=""
	I1009 19:08:08.931256  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:08.931581  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:09.431348  161014 type.go:168] "Request Body" body=""
	I1009 19:08:09.431501  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:09.431847  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:09.931761  161014 type.go:168] "Request Body" body=""
	I1009 19:08:09.931868  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:09.932264  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:09.932358  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:10.431111  161014 type.go:168] "Request Body" body=""
	I1009 19:08:10.431226  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:10.431600  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:10.931402  161014 type.go:168] "Request Body" body=""
	I1009 19:08:10.931502  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:10.931871  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:11.431784  161014 type.go:168] "Request Body" body=""
	I1009 19:08:11.431872  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:11.432233  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:11.931048  161014 type.go:168] "Request Body" body=""
	I1009 19:08:11.931144  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:11.931576  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:12.431421  161014 type.go:168] "Request Body" body=""
	I1009 19:08:12.431503  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:12.431862  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:12.431928  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:12.931757  161014 type.go:168] "Request Body" body=""
	I1009 19:08:12.931854  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:12.932305  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:13.431097  161014 type.go:168] "Request Body" body=""
	I1009 19:08:13.431185  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:13.431628  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:13.931448  161014 type.go:168] "Request Body" body=""
	I1009 19:08:13.931544  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:13.931895  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:14.431813  161014 type.go:168] "Request Body" body=""
	I1009 19:08:14.431896  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:14.432325  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:14.432452  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:14.931193  161014 type.go:168] "Request Body" body=""
	I1009 19:08:14.931304  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:14.931724  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:15.431610  161014 type.go:168] "Request Body" body=""
	I1009 19:08:15.431784  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:15.432189  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:15.930996  161014 type.go:168] "Request Body" body=""
	I1009 19:08:15.931076  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:15.931476  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:16.431279  161014 type.go:168] "Request Body" body=""
	I1009 19:08:16.431364  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:16.431823  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:16.931708  161014 type.go:168] "Request Body" body=""
	I1009 19:08:16.931791  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:16.932165  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:16.932241  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:17.430990  161014 type.go:168] "Request Body" body=""
	I1009 19:08:17.431074  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:17.431506  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:17.931431  161014 type.go:168] "Request Body" body=""
	I1009 19:08:17.931525  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:17.931892  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:18.431806  161014 type.go:168] "Request Body" body=""
	I1009 19:08:18.431897  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:18.432299  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:18.931120  161014 type.go:168] "Request Body" body=""
	I1009 19:08:18.931214  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:18.931614  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:19.431514  161014 type.go:168] "Request Body" body=""
	I1009 19:08:19.431606  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:19.432047  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:19.432124  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:19.931598  161014 type.go:168] "Request Body" body=""
	I1009 19:08:19.931691  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:19.932042  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:20.431891  161014 type.go:168] "Request Body" body=""
	I1009 19:08:20.431971  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:20.432405  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:20.931180  161014 type.go:168] "Request Body" body=""
	I1009 19:08:20.931263  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:20.931621  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:21.431543  161014 type.go:168] "Request Body" body=""
	I1009 19:08:21.431622  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:21.432021  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:21.931880  161014 type.go:168] "Request Body" body=""
	I1009 19:08:21.931973  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:21.932344  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:21.932455  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:22.431220  161014 type.go:168] "Request Body" body=""
	I1009 19:08:22.431312  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:22.431735  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:22.931611  161014 type.go:168] "Request Body" body=""
	I1009 19:08:22.931692  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:22.932047  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:23.430844  161014 type.go:168] "Request Body" body=""
	I1009 19:08:23.430928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:23.431339  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:23.931177  161014 type.go:168] "Request Body" body=""
	I1009 19:08:23.931280  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:23.931703  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:24.431533  161014 type.go:168] "Request Body" body=""
	I1009 19:08:24.431623  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:24.432029  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:24.432099  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:24.930846  161014 type.go:168] "Request Body" body=""
	I1009 19:08:24.930940  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:24.931301  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:25.431093  161014 type.go:168] "Request Body" body=""
	I1009 19:08:25.431180  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:25.431586  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:25.931364  161014 type.go:168] "Request Body" body=""
	I1009 19:08:25.931490  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:25.931848  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:26.431757  161014 type.go:168] "Request Body" body=""
	I1009 19:08:26.431844  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:26.432286  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:26.432356  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:26.931111  161014 type.go:168] "Request Body" body=""
	I1009 19:08:26.931219  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:26.931654  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:27.431562  161014 type.go:168] "Request Body" body=""
	I1009 19:08:27.431657  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:27.432104  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:27.931917  161014 type.go:168] "Request Body" body=""
	I1009 19:08:27.932031  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:27.932479  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:28.431253  161014 type.go:168] "Request Body" body=""
	I1009 19:08:28.431334  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:28.431741  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:28.931701  161014 type.go:168] "Request Body" body=""
	I1009 19:08:28.931793  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:28.932147  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:28.932231  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:29.430994  161014 type.go:168] "Request Body" body=""
	I1009 19:08:29.431076  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:29.431507  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:29.931284  161014 type.go:168] "Request Body" body=""
	I1009 19:08:29.931372  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:29.931786  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:30.431725  161014 type.go:168] "Request Body" body=""
	I1009 19:08:30.431807  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:30.432196  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:30.930995  161014 type.go:168] "Request Body" body=""
	I1009 19:08:30.931086  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:30.931489  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:31.431293  161014 type.go:168] "Request Body" body=""
	I1009 19:08:31.431407  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:31.431802  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:31.431899  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:31.931763  161014 type.go:168] "Request Body" body=""
	I1009 19:08:31.931847  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:31.932233  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:32.431064  161014 type.go:168] "Request Body" body=""
	I1009 19:08:32.431143  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:32.431569  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:32.931367  161014 type.go:168] "Request Body" body=""
	I1009 19:08:32.931475  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:32.931834  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:33.431666  161014 type.go:168] "Request Body" body=""
	I1009 19:08:33.431746  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:33.432152  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:33.432228  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:33.931085  161014 type.go:168] "Request Body" body=""
	I1009 19:08:33.931187  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:33.931603  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:34.431399  161014 type.go:168] "Request Body" body=""
	I1009 19:08:34.431485  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:34.431891  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:34.931782  161014 type.go:168] "Request Body" body=""
	I1009 19:08:34.931877  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:34.932244  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:35.431033  161014 type.go:168] "Request Body" body=""
	I1009 19:08:35.431120  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:35.431472  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:35.931247  161014 type.go:168] "Request Body" body=""
	I1009 19:08:35.931336  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:35.931759  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:35.931829  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:36.431682  161014 type.go:168] "Request Body" body=""
	I1009 19:08:36.431785  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:36.432193  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:36.931013  161014 type.go:168] "Request Body" body=""
	I1009 19:08:36.931099  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:36.931470  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:37.431265  161014 type.go:168] "Request Body" body=""
	I1009 19:08:37.431370  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:37.431819  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:37.931612  161014 type.go:168] "Request Body" body=""
	I1009 19:08:37.931700  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:37.932079  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:37.932145  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:38.430913  161014 type.go:168] "Request Body" body=""
	I1009 19:08:38.431022  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:38.431519  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:38.931233  161014 type.go:168] "Request Body" body=""
	I1009 19:08:38.931319  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:38.931686  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:39.431521  161014 type.go:168] "Request Body" body=""
	I1009 19:08:39.431627  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:39.432049  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:39.931904  161014 type.go:168] "Request Body" body=""
	I1009 19:08:39.932008  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:39.932353  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:39.932451  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:40.431183  161014 type.go:168] "Request Body" body=""
	I1009 19:08:40.431282  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:40.431716  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:40.931624  161014 type.go:168] "Request Body" body=""
	I1009 19:08:40.931713  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:40.932079  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:41.430889  161014 type.go:168] "Request Body" body=""
	I1009 19:08:41.430987  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:41.431423  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:41.931240  161014 type.go:168] "Request Body" body=""
	I1009 19:08:41.931324  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:41.931700  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:42.431534  161014 type.go:168] "Request Body" body=""
	I1009 19:08:42.431639  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:42.432064  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:42.432142  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:42.930885  161014 type.go:168] "Request Body" body=""
	I1009 19:08:42.930975  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:42.931354  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:43.431227  161014 type.go:168] "Request Body" body=""
	I1009 19:08:43.431323  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:43.431715  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:43.931552  161014 type.go:168] "Request Body" body=""
	I1009 19:08:43.931632  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:43.931992  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:44.431828  161014 type.go:168] "Request Body" body=""
	I1009 19:08:44.431924  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:44.432325  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:44.432415  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:44.931136  161014 type.go:168] "Request Body" body=""
	I1009 19:08:44.931245  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:44.931664  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:45.431554  161014 type.go:168] "Request Body" body=""
	I1009 19:08:45.431649  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:45.432042  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:45.931929  161014 type.go:168] "Request Body" body=""
	I1009 19:08:45.932032  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:45.932456  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:46.431215  161014 type.go:168] "Request Body" body=""
	I1009 19:08:46.431303  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:46.431675  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:46.931516  161014 type.go:168] "Request Body" body=""
	I1009 19:08:46.931612  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:46.932033  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:46.932105  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:47.431930  161014 type.go:168] "Request Body" body=""
	I1009 19:08:47.432024  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:47.432404  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:47.931253  161014 type.go:168] "Request Body" body=""
	I1009 19:08:47.931351  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:47.931772  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:48.431679  161014 type.go:168] "Request Body" body=""
	I1009 19:08:48.431767  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:48.432147  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:48.930986  161014 type.go:168] "Request Body" body=""
	I1009 19:08:48.931073  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:48.931466  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:49.431246  161014 type.go:168] "Request Body" body=""
	I1009 19:08:49.431332  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:49.431709  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:49.431791  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:49.931583  161014 type.go:168] "Request Body" body=""
	I1009 19:08:49.931665  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:49.932043  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:50.430854  161014 type.go:168] "Request Body" body=""
	I1009 19:08:50.430942  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:50.431310  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:50.931059  161014 type.go:168] "Request Body" body=""
	I1009 19:08:50.931138  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:50.931534  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:51.431317  161014 type.go:168] "Request Body" body=""
	I1009 19:08:51.431423  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:51.431783  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:51.431860  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:51.931678  161014 type.go:168] "Request Body" body=""
	I1009 19:08:51.931770  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:51.932161  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:52.430940  161014 type.go:168] "Request Body" body=""
	I1009 19:08:52.431043  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:52.431471  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:52.931234  161014 type.go:168] "Request Body" body=""
	I1009 19:08:52.931317  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:52.931697  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:53.431539  161014 type.go:168] "Request Body" body=""
	I1009 19:08:53.431626  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:53.432040  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:53.432105  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:53.931898  161014 type.go:168] "Request Body" body=""
	I1009 19:08:53.931980  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:53.932334  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:54.431123  161014 type.go:168] "Request Body" body=""
	I1009 19:08:54.431206  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:54.431572  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:54.931007  161014 type.go:168] "Request Body" body=""
	I1009 19:08:54.931094  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:54.931473  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:55.431255  161014 type.go:168] "Request Body" body=""
	I1009 19:08:55.431334  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:55.431719  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:55.931595  161014 type.go:168] "Request Body" body=""
	I1009 19:08:55.931684  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:55.932059  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:55.932132  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:56.430905  161014 type.go:168] "Request Body" body=""
	I1009 19:08:56.430996  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:56.431358  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:56.931139  161014 type.go:168] "Request Body" body=""
	I1009 19:08:56.931225  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:56.931614  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:57.431422  161014 type.go:168] "Request Body" body=""
	I1009 19:08:57.431520  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:57.431890  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:57.931717  161014 type.go:168] "Request Body" body=""
	I1009 19:08:57.931804  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:57.932243  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:57.932309  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:58.431442  161014 type.go:168] "Request Body" body=""
	I1009 19:08:58.431719  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:58.432305  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:58.931643  161014 type.go:168] "Request Body" body=""
	I1009 19:08:58.931736  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:58.932089  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:59.431793  161014 type.go:168] "Request Body" body=""
	I1009 19:08:59.431868  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:59.432216  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:59.931889  161014 type.go:168] "Request Body" body=""
	I1009 19:08:59.931982  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:59.932322  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:59.932430  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:00.430938  161014 type.go:168] "Request Body" body=""
	I1009 19:09:00.431025  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:00.431413  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:00.930953  161014 type.go:168] "Request Body" body=""
	I1009 19:09:00.931042  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:00.931443  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:01.431021  161014 type.go:168] "Request Body" body=""
	I1009 19:09:01.431114  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:01.431513  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:01.931074  161014 type.go:168] "Request Body" body=""
	I1009 19:09:01.931154  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:01.931545  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:02.431340  161014 type.go:168] "Request Body" body=""
	I1009 19:09:02.431449  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:02.431830  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:02.431902  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:02.931823  161014 type.go:168] "Request Body" body=""
	I1009 19:09:02.931913  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:02.932314  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:03.431114  161014 type.go:168] "Request Body" body=""
	I1009 19:09:03.431193  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:03.431578  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:03.931464  161014 type.go:168] "Request Body" body=""
	I1009 19:09:03.931552  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:03.931986  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:04.431831  161014 type.go:168] "Request Body" body=""
	I1009 19:09:04.431934  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:04.432314  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:04.432398  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:04.931129  161014 type.go:168] "Request Body" body=""
	I1009 19:09:04.931216  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:04.931674  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:05.431533  161014 type.go:168] "Request Body" body=""
	I1009 19:09:05.431611  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:05.432021  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:05.931854  161014 type.go:168] "Request Body" body=""
	I1009 19:09:05.931940  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:05.932334  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:06.431088  161014 type.go:168] "Request Body" body=""
	I1009 19:09:06.431167  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:06.431515  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:06.931278  161014 type.go:168] "Request Body" body=""
	I1009 19:09:06.931362  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:06.931748  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:06.931816  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:07.431644  161014 type.go:168] "Request Body" body=""
	I1009 19:09:07.431747  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:07.432178  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:07.931866  161014 type.go:168] "Request Body" body=""
	I1009 19:09:07.931950  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:07.932290  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:08.431090  161014 type.go:168] "Request Body" body=""
	I1009 19:09:08.431172  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:08.431650  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:08.931429  161014 type.go:168] "Request Body" body=""
	I1009 19:09:08.931507  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:08.931843  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:08.931909  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:09.431805  161014 type.go:168] "Request Body" body=""
	I1009 19:09:09.431897  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:09.432328  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:09.931113  161014 type.go:168] "Request Body" body=""
	I1009 19:09:09.931194  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:09.931569  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:10.431340  161014 type.go:168] "Request Body" body=""
	I1009 19:09:10.431473  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:10.431864  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:10.931696  161014 type.go:168] "Request Body" body=""
	I1009 19:09:10.931778  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:10.932062  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:10.932116  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:11.430855  161014 type.go:168] "Request Body" body=""
	I1009 19:09:11.430938  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:11.431371  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:11.931153  161014 type.go:168] "Request Body" body=""
	I1009 19:09:11.931230  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:11.931601  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:12.431453  161014 type.go:168] "Request Body" body=""
	I1009 19:09:12.431539  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:12.431968  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:12.931803  161014 type.go:168] "Request Body" body=""
	I1009 19:09:12.931890  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:12.932230  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:12.932299  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:13.431049  161014 type.go:168] "Request Body" body=""
	I1009 19:09:13.431141  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:13.431581  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:13.931422  161014 type.go:168] "Request Body" body=""
	I1009 19:09:13.931504  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:13.931849  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:14.431710  161014 type.go:168] "Request Body" body=""
	I1009 19:09:14.431803  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:14.432200  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:14.930978  161014 type.go:168] "Request Body" body=""
	I1009 19:09:14.931058  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:14.931421  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:15.431205  161014 type.go:168] "Request Body" body=""
	I1009 19:09:15.431290  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:15.431792  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:15.431868  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:15.931738  161014 type.go:168] "Request Body" body=""
	I1009 19:09:15.931822  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:15.932171  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:16.430949  161014 type.go:168] "Request Body" body=""
	I1009 19:09:16.431033  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:16.431370  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:16.931168  161014 type.go:168] "Request Body" body=""
	I1009 19:09:16.931244  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:16.931635  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:17.431446  161014 type.go:168] "Request Body" body=""
	I1009 19:09:17.431534  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:17.431909  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:17.431982  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:17.931495  161014 type.go:168] "Request Body" body=""
	I1009 19:09:17.931580  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:17.931927  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:18.431744  161014 type.go:168] "Request Body" body=""
	I1009 19:09:18.431828  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:18.432200  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:18.931151  161014 type.go:168] "Request Body" body=""
	I1009 19:09:18.931250  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:18.931652  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:19.431441  161014 type.go:168] "Request Body" body=""
	I1009 19:09:19.431529  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:19.431984  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:19.432070  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:19.931848  161014 type.go:168] "Request Body" body=""
	I1009 19:09:19.931941  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:19.932309  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:20.431088  161014 type.go:168] "Request Body" body=""
	I1009 19:09:20.431164  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:20.431555  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:20.931352  161014 type.go:168] "Request Body" body=""
	I1009 19:09:20.931455  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:20.931826  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:21.431728  161014 type.go:168] "Request Body" body=""
	I1009 19:09:21.431814  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:21.432175  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:21.432242  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:21.930958  161014 type.go:168] "Request Body" body=""
	I1009 19:09:21.931034  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:21.931435  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:22.431185  161014 type.go:168] "Request Body" body=""
	I1009 19:09:22.431270  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:22.431704  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:22.931192  161014 type.go:168] "Request Body" body=""
	I1009 19:09:22.931273  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:22.931689  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:23.431502  161014 type.go:168] "Request Body" body=""
	I1009 19:09:23.431580  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:23.431996  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:23.930860  161014 type.go:168] "Request Body" body=""
	I1009 19:09:23.930955  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:23.931370  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:23.931478  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:24.431207  161014 type.go:168] "Request Body" body=""
	I1009 19:09:24.431286  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:24.431650  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:24.931519  161014 type.go:168] "Request Body" body=""
	I1009 19:09:24.931614  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:24.931998  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:25.431913  161014 type.go:168] "Request Body" body=""
	I1009 19:09:25.432001  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:25.432369  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:25.931194  161014 type.go:168] "Request Body" body=""
	I1009 19:09:25.931275  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:25.931711  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:25.931786  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:26.431609  161014 type.go:168] "Request Body" body=""
	I1009 19:09:26.431690  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:26.432050  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:26.931918  161014 type.go:168] "Request Body" body=""
	I1009 19:09:26.932020  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:26.932417  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:27.431187  161014 type.go:168] "Request Body" body=""
	I1009 19:09:27.431268  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:27.431666  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:27.931530  161014 type.go:168] "Request Body" body=""
	I1009 19:09:27.931614  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:27.931987  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:27.932055  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:28.431844  161014 type.go:168] "Request Body" body=""
	I1009 19:09:28.431933  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:28.432359  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:28.931165  161014 type.go:168] "Request Body" body=""
	I1009 19:09:28.931247  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:28.931680  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:29.431569  161014 type.go:168] "Request Body" body=""
	I1009 19:09:29.431650  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:29.432040  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:29.931942  161014 type.go:168] "Request Body" body=""
	I1009 19:09:29.932027  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:29.932374  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:29.932460  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:30.431194  161014 type.go:168] "Request Body" body=""
	I1009 19:09:30.431282  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:30.431737  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:30.931616  161014 type.go:168] "Request Body" body=""
	I1009 19:09:30.931725  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:30.932121  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:31.430987  161014 type.go:168] "Request Body" body=""
	I1009 19:09:31.431078  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:31.431478  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:31.931232  161014 type.go:168] "Request Body" body=""
	I1009 19:09:31.931315  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:31.931680  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:32.431533  161014 type.go:168] "Request Body" body=""
	I1009 19:09:32.431613  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:32.431992  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:32.432063  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:32.931853  161014 type.go:168] "Request Body" body=""
	I1009 19:09:32.931950  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:32.932297  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:33.431044  161014 type.go:168] "Request Body" body=""
	I1009 19:09:33.431132  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:33.431543  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:33.931355  161014 type.go:168] "Request Body" body=""
	I1009 19:09:33.931458  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:33.931818  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:34.431650  161014 type.go:168] "Request Body" body=""
	I1009 19:09:34.431733  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:34.432148  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:34.432213  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:34.930967  161014 type.go:168] "Request Body" body=""
	I1009 19:09:34.931063  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:34.931473  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:35.431283  161014 type.go:168] "Request Body" body=""
	I1009 19:09:35.431373  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:35.431779  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:35.931628  161014 type.go:168] "Request Body" body=""
	I1009 19:09:35.931736  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:35.932084  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:36.430910  161014 type.go:168] "Request Body" body=""
	I1009 19:09:36.431012  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:36.431444  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:36.931340  161014 type.go:168] "Request Body" body=""
	I1009 19:09:36.931451  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:36.931825  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:36.931893  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:37.431740  161014 type.go:168] "Request Body" body=""
	I1009 19:09:37.431822  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:37.432174  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:37.931117  161014 type.go:168] "Request Body" body=""
	I1009 19:09:37.931218  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:37.931587  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:38.431359  161014 type.go:168] "Request Body" body=""
	I1009 19:09:38.431467  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:38.431870  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:38.931821  161014 type.go:168] "Request Body" body=""
	I1009 19:09:38.931902  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:38.932265  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:38.932358  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:39.431099  161014 type.go:168] "Request Body" body=""
	I1009 19:09:39.431179  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:39.431570  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:39.931428  161014 type.go:168] "Request Body" body=""
	I1009 19:09:39.931517  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:39.931883  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:40.431747  161014 type.go:168] "Request Body" body=""
	I1009 19:09:40.431830  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:40.432201  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:40.931026  161014 type.go:168] "Request Body" body=""
	I1009 19:09:40.931135  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:40.931594  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:41.431370  161014 type.go:168] "Request Body" body=""
	I1009 19:09:41.431476  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:41.431863  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:41.431951  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:41.931795  161014 type.go:168] "Request Body" body=""
	I1009 19:09:41.931873  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:41.932227  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:42.431026  161014 type.go:168] "Request Body" body=""
	I1009 19:09:42.431112  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:42.431474  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:42.931233  161014 type.go:168] "Request Body" body=""
	I1009 19:09:42.931308  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:42.931720  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:43.431625  161014 type.go:168] "Request Body" body=""
	I1009 19:09:43.431708  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:43.432076  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:43.432152  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:43.930876  161014 type.go:168] "Request Body" body=""
	I1009 19:09:43.930965  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:43.931363  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:44.431159  161014 type.go:168] "Request Body" body=""
	I1009 19:09:44.431252  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:44.431660  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:44.931539  161014 type.go:168] "Request Body" body=""
	I1009 19:09:44.931619  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:44.932022  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:45.431848  161014 type.go:168] "Request Body" body=""
	I1009 19:09:45.431928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:45.432294  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:45.432362  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:45.931071  161014 type.go:168] "Request Body" body=""
	I1009 19:09:45.931154  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:45.931550  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:46.431330  161014 type.go:168] "Request Body" body=""
	I1009 19:09:46.431433  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:46.431785  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:46.931618  161014 type.go:168] "Request Body" body=""
	I1009 19:09:46.931717  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:46.932083  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:47.430878  161014 type.go:168] "Request Body" body=""
	I1009 19:09:47.430967  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:47.431308  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:47.931113  161014 type.go:168] "Request Body" body=""
	I1009 19:09:47.931193  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:47.931575  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:47.931645  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:48.431350  161014 type.go:168] "Request Body" body=""
	I1009 19:09:48.431448  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:48.431909  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:48.931846  161014 type.go:168] "Request Body" body=""
	I1009 19:09:48.931928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:48.932292  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:49.431050  161014 type.go:168] "Request Body" body=""
	I1009 19:09:49.431125  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:49.431508  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:49.931265  161014 type.go:168] "Request Body" body=""
	I1009 19:09:49.931345  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:49.931748  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:49.931814  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:50.431652  161014 type.go:168] "Request Body" body=""
	I1009 19:09:50.431747  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:50.432126  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:50.930878  161014 type.go:168] "Request Body" body=""
	I1009 19:09:50.930959  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:50.931336  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:51.431163  161014 type.go:168] "Request Body" body=""
	I1009 19:09:51.431258  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:51.431622  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:51.931358  161014 type.go:168] "Request Body" body=""
	I1009 19:09:51.931464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:51.931852  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:51.931924  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:52.431703  161014 type.go:168] "Request Body" body=""
	I1009 19:09:52.431795  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:52.432179  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:52.930954  161014 type.go:168] "Request Body" body=""
	I1009 19:09:52.931050  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:52.931459  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:53.431224  161014 type.go:168] "Request Body" body=""
	I1009 19:09:53.431365  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:53.431740  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:53.931748  161014 type.go:168] "Request Body" body=""
	I1009 19:09:53.931831  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:53.932191  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:53.932260  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:54.430975  161014 type.go:168] "Request Body" body=""
	I1009 19:09:54.431053  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:54.431476  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:54.931249  161014 type.go:168] "Request Body" body=""
	I1009 19:09:54.931341  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:54.931729  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:55.431607  161014 type.go:168] "Request Body" body=""
	I1009 19:09:55.431691  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:55.432110  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:55.930917  161014 type.go:168] "Request Body" body=""
	I1009 19:09:55.931003  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:55.931362  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:56.431145  161014 type.go:168] "Request Body" body=""
	I1009 19:09:56.431222  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:56.431639  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:56.431710  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:56.931556  161014 type.go:168] "Request Body" body=""
	I1009 19:09:56.931656  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:56.932062  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:57.431903  161014 type.go:168] "Request Body" body=""
	I1009 19:09:57.431989  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:57.432337  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:57.931402  161014 type.go:168] "Request Body" body=""
	I1009 19:09:57.931482  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:57.931843  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:58.431701  161014 type.go:168] "Request Body" body=""
	I1009 19:09:58.431790  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:58.432145  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:58.432218  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:58.931088  161014 type.go:168] "Request Body" body=""
	I1009 19:09:58.931175  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:58.931505  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:59.431298  161014 type.go:168] "Request Body" body=""
	I1009 19:09:59.431395  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:59.431751  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:59.931602  161014 type.go:168] "Request Body" body=""
	I1009 19:09:59.931702  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:59.932051  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:00.430856  161014 type.go:168] "Request Body" body=""
	I1009 19:10:00.430958  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:00.431337  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:00.931121  161014 type.go:168] "Request Body" body=""
	I1009 19:10:00.931220  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:00.931593  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:00.931674  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:01.431423  161014 type.go:168] "Request Body" body=""
	I1009 19:10:01.431509  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:01.431863  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:01.931614  161014 type.go:168] "Request Body" body=""
	I1009 19:10:01.931705  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:01.932079  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:02.430870  161014 type.go:168] "Request Body" body=""
	I1009 19:10:02.430952  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:02.431333  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:02.931135  161014 type.go:168] "Request Body" body=""
	I1009 19:10:02.931235  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:02.931629  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:02.931714  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:03.431520  161014 type.go:168] "Request Body" body=""
	I1009 19:10:03.431673  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:03.432032  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:03.930864  161014 type.go:168] "Request Body" body=""
	I1009 19:10:03.930947  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:03.931344  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:04.431204  161014 type.go:168] "Request Body" body=""
	I1009 19:10:04.431296  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:04.431704  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:04.931600  161014 type.go:168] "Request Body" body=""
	I1009 19:10:04.931678  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:04.932040  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:04.932106  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:05.430899  161014 type.go:168] "Request Body" body=""
	I1009 19:10:05.431003  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:05.431407  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:05.931195  161014 type.go:168] "Request Body" body=""
	I1009 19:10:05.931270  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:05.931635  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:06.431451  161014 type.go:168] "Request Body" body=""
	I1009 19:10:06.431534  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:06.431953  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:06.931837  161014 type.go:168] "Request Body" body=""
	I1009 19:10:06.931927  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:06.932279  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:06.932345  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:07.431076  161014 type.go:168] "Request Body" body=""
	I1009 19:10:07.431164  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:07.431506  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:07.931394  161014 type.go:168] "Request Body" body=""
	I1009 19:10:07.931475  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:07.931835  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:08.431660  161014 type.go:168] "Request Body" body=""
	I1009 19:10:08.431741  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:08.432102  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:08.930920  161014 type.go:168] "Request Body" body=""
	I1009 19:10:08.930998  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:08.931426  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:09.431179  161014 type.go:168] "Request Body" body=""
	I1009 19:10:09.431260  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:09.431640  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:09.431713  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:09.931551  161014 type.go:168] "Request Body" body=""
	I1009 19:10:09.931636  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:09.932085  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:10.430911  161014 type.go:168] "Request Body" body=""
	I1009 19:10:10.431004  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:10.431408  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:10.931180  161014 type.go:168] "Request Body" body=""
	I1009 19:10:10.931260  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:10.931651  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:11.431533  161014 type.go:168] "Request Body" body=""
	I1009 19:10:11.431610  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:11.432017  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:11.432093  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:11.930846  161014 type.go:168] "Request Body" body=""
	I1009 19:10:11.930928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:11.931300  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:12.431099  161014 type.go:168] "Request Body" body=""
	I1009 19:10:12.431188  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:12.431615  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:12.931577  161014 type.go:168] "Request Body" body=""
	I1009 19:10:12.931661  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:12.932029  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:13.431910  161014 type.go:168] "Request Body" body=""
	I1009 19:10:13.431995  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:13.432350  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:13.432438  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:13.931217  161014 type.go:168] "Request Body" body=""
	I1009 19:10:13.931302  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:13.931678  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:14.431548  161014 type.go:168] "Request Body" body=""
	I1009 19:10:14.431638  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:14.432110  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:14.930876  161014 type.go:168] "Request Body" body=""
	I1009 19:10:14.930963  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:14.931343  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:15.431156  161014 type.go:168] "Request Body" body=""
	I1009 19:10:15.431244  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:15.431618  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:15.931358  161014 type.go:168] "Request Body" body=""
	I1009 19:10:15.931451  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:15.931817  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:15.931883  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:16.431696  161014 type.go:168] "Request Body" body=""
	I1009 19:10:16.431794  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:16.432158  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:16.930930  161014 type.go:168] "Request Body" body=""
	I1009 19:10:16.931010  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:16.931370  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:17.431200  161014 type.go:168] "Request Body" body=""
	I1009 19:10:17.431290  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:17.431663  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:17.931525  161014 type.go:168] "Request Body" body=""
	I1009 19:10:17.931613  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:17.932012  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:17.932077  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:18.431980  161014 type.go:168] "Request Body" body=""
	I1009 19:10:18.432065  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:18.432498  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:18.931327  161014 type.go:168] "Request Body" body=""
	I1009 19:10:18.931435  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:18.931798  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:19.431649  161014 type.go:168] "Request Body" body=""
	I1009 19:10:19.431736  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:19.432133  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:19.930941  161014 type.go:168] "Request Body" body=""
	I1009 19:10:19.931034  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:19.931426  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:20.431191  161014 type.go:168] "Request Body" body=""
	I1009 19:10:20.431277  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:20.431702  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:20.431786  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:20.931649  161014 type.go:168] "Request Body" body=""
	I1009 19:10:20.931743  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:20.932145  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:21.430998  161014 type.go:168] "Request Body" body=""
	I1009 19:10:21.431093  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:21.431500  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:21.931294  161014 type.go:168] "Request Body" body=""
	I1009 19:10:21.931404  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:21.931769  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:22.431592  161014 type.go:168] "Request Body" body=""
	I1009 19:10:22.431689  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:22.432061  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:22.432138  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:22.930890  161014 type.go:168] "Request Body" body=""
	I1009 19:10:22.930981  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:22.931355  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:23.431123  161014 type.go:168] "Request Body" body=""
	I1009 19:10:23.431202  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:23.431562  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:23.931393  161014 type.go:168] "Request Body" body=""
	I1009 19:10:23.931475  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:23.931849  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:24.431681  161014 type.go:168] "Request Body" body=""
	I1009 19:10:24.431765  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:24.432120  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:24.432200  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:24.930948  161014 type.go:168] "Request Body" body=""
	I1009 19:10:24.931038  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:24.931411  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:25.431172  161014 type.go:168] "Request Body" body=""
	I1009 19:10:25.431263  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:25.431645  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:25.931517  161014 type.go:168] "Request Body" body=""
	I1009 19:10:25.931604  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:25.931950  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:26.431795  161014 type.go:168] "Request Body" body=""
	I1009 19:10:26.431877  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:26.432259  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:26.432327  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:26.931108  161014 type.go:168] "Request Body" body=""
	I1009 19:10:26.931192  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:26.931561  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:27.431372  161014 type.go:168] "Request Body" body=""
	I1009 19:10:27.431478  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:27.431852  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:27.931767  161014 type.go:168] "Request Body" body=""
	I1009 19:10:27.931844  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:27.932243  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:28.431036  161014 type.go:168] "Request Body" body=""
	I1009 19:10:28.431114  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:28.431500  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:28.931317  161014 type.go:168] "Request Body" body=""
	I1009 19:10:28.931415  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:28.931802  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:28.931870  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:29.431682  161014 type.go:168] "Request Body" body=""
	I1009 19:10:29.431764  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:29.432158  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:29.930948  161014 type.go:168] "Request Body" body=""
	I1009 19:10:29.931029  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:29.931432  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:30.431237  161014 type.go:168] "Request Body" body=""
	I1009 19:10:30.431318  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:30.431700  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:30.931592  161014 type.go:168] "Request Body" body=""
	I1009 19:10:30.931686  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:30.932066  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:30.932138  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:31.430865  161014 type.go:168] "Request Body" body=""
	I1009 19:10:31.430944  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:31.431326  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:31.931100  161014 type.go:168] "Request Body" body=""
	I1009 19:10:31.931183  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:31.931557  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:32.431408  161014 type.go:168] "Request Body" body=""
	I1009 19:10:32.431492  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:32.431860  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:32.931727  161014 type.go:168] "Request Body" body=""
	I1009 19:10:32.931827  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:32.932201  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:32.932275  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:33.431035  161014 type.go:168] "Request Body" body=""
	I1009 19:10:33.431127  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:33.431509  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:33.931347  161014 type.go:168] "Request Body" body=""
	I1009 19:10:33.931452  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:33.931805  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:34.431659  161014 type.go:168] "Request Body" body=""
	I1009 19:10:34.431767  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:34.432157  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:34.930935  161014 type.go:168] "Request Body" body=""
	I1009 19:10:34.931032  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:34.931422  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:35.431188  161014 type.go:168] "Request Body" body=""
	I1009 19:10:35.431266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:35.431638  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:35.431700  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:35.931496  161014 type.go:168] "Request Body" body=""
	I1009 19:10:35.931583  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:35.931982  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:36.431843  161014 type.go:168] "Request Body" body=""
	I1009 19:10:36.431930  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:36.432287  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:36.931012  161014 type.go:168] "Request Body" body=""
	I1009 19:10:36.931101  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:36.931479  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:37.431254  161014 type.go:168] "Request Body" body=""
	I1009 19:10:37.431336  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:37.431708  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:37.431785  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:37.931498  161014 type.go:168] "Request Body" body=""
	I1009 19:10:37.931578  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:37.931952  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:38.431802  161014 type.go:168] "Request Body" body=""
	I1009 19:10:38.431878  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:38.432242  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:38.931094  161014 type.go:168] "Request Body" body=""
	I1009 19:10:38.931171  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:38.931535  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:39.431342  161014 type.go:168] "Request Body" body=""
	I1009 19:10:39.431467  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:39.431828  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:39.431894  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:39.931678  161014 type.go:168] "Request Body" body=""
	I1009 19:10:39.931769  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:39.932114  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:40.430894  161014 type.go:168] "Request Body" body=""
	I1009 19:10:40.431002  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:40.431338  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:40.931086  161014 type.go:168] "Request Body" body=""
	I1009 19:10:40.931169  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:40.931549  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:41.431354  161014 type.go:168] "Request Body" body=""
	I1009 19:10:41.431484  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:41.431936  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:41.432009  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:41.931856  161014 type.go:168] "Request Body" body=""
	I1009 19:10:41.931944  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:41.932342  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:42.431239  161014 type.go:168] "Request Body" body=""
	I1009 19:10:42.431343  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:42.431776  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:42.931642  161014 type.go:168] "Request Body" body=""
	I1009 19:10:42.931724  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:42.932139  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:43.430955  161014 type.go:168] "Request Body" body=""
	I1009 19:10:43.431055  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:43.431482  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:43.931286  161014 type.go:168] "Request Body" body=""
	I1009 19:10:43.931364  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:43.931761  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:43.931841  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:44.431651  161014 type.go:168] "Request Body" body=""
	I1009 19:10:44.431739  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:44.432136  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:44.930918  161014 type.go:168] "Request Body" body=""
	I1009 19:10:44.930997  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:44.931368  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:45.431210  161014 type.go:168] "Request Body" body=""
	I1009 19:10:45.431301  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:45.431803  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:45.931785  161014 type.go:168] "Request Body" body=""
	I1009 19:10:45.931879  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:45.932234  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:45.932298  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:46.431044  161014 type.go:168] "Request Body" body=""
	I1009 19:10:46.431130  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:46.431509  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:46.931298  161014 type.go:168] "Request Body" body=""
	I1009 19:10:46.931409  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:46.931768  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:47.431684  161014 type.go:168] "Request Body" body=""
	I1009 19:10:47.431772  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:47.432192  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:47.930892  161014 type.go:168] "Request Body" body=""
	I1009 19:10:47.931082  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:47.931491  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:48.431254  161014 type.go:168] "Request Body" body=""
	I1009 19:10:48.431334  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:48.431754  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:48.431817  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:48.931519  161014 type.go:168] "Request Body" body=""
	I1009 19:10:48.931605  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:48.931963  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:49.431903  161014 type.go:168] "Request Body" body=""
	I1009 19:10:49.431995  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:49.432442  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:49.931216  161014 type.go:168] "Request Body" body=""
	I1009 19:10:49.931299  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:49.931685  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:50.431513  161014 type.go:168] "Request Body" body=""
	I1009 19:10:50.431600  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:50.432015  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:50.432094  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:50.931903  161014 type.go:168] "Request Body" body=""
	I1009 19:10:50.931985  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:50.932356  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:51.431151  161014 type.go:168] "Request Body" body=""
	I1009 19:10:51.431235  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:51.431691  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:51.931607  161014 type.go:168] "Request Body" body=""
	I1009 19:10:51.931704  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:51.932066  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:52.430855  161014 type.go:168] "Request Body" body=""
	I1009 19:10:52.430936  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:52.431352  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:52.931144  161014 type.go:168] "Request Body" body=""
	I1009 19:10:52.931236  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:52.931629  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:52.931694  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:53.431504  161014 type.go:168] "Request Body" body=""
	I1009 19:10:53.431592  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:53.431978  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:53.930879  161014 type.go:168] "Request Body" body=""
	I1009 19:10:53.930990  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:53.931420  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:54.431176  161014 type.go:168] "Request Body" body=""
	I1009 19:10:54.431256  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:54.431696  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:54.931517  161014 type.go:168] "Request Body" body=""
	I1009 19:10:54.931600  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:54.932006  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:54.932070  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:55.431919  161014 type.go:168] "Request Body" body=""
	I1009 19:10:55.432013  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:55.432499  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:55.931252  161014 type.go:168] "Request Body" body=""
	I1009 19:10:55.931340  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:55.931770  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:56.431601  161014 type.go:168] "Request Body" body=""
	I1009 19:10:56.431705  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:56.432063  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:56.930857  161014 type.go:168] "Request Body" body=""
	I1009 19:10:56.930944  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:56.931308  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:57.431063  161014 type.go:168] "Request Body" body=""
	I1009 19:10:57.431152  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:57.431557  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:57.431627  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:57.931435  161014 type.go:168] "Request Body" body=""
	I1009 19:10:57.931520  161014 node_ready.go:38] duration metric: took 6m0.000788191s for node "functional-158523" to be "Ready" ...
	I1009 19:10:57.934316  161014 out.go:203] 
	W1009 19:10:57.935818  161014 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:10:57.935834  161014 out.go:285] * 
	* 
	W1009 19:10:57.937485  161014 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:10:57.938875  161014 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-amd64 start -p functional-158523 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m4.605390738s for "functional-158523" cluster.
I1009 19:10:58.419033  141519 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
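The retry loop filling the log above is minikube's node-readiness wait: it re-issues GET /api/v1/nodes/functional-158523 roughly every 500ms, each attempt failing with "connection refused" because nothing is listening on 192.168.49.2:8441, until the 6m0s budget expires with "context deadline exceeded". A minimal Go sketch of that poll-with-deadline pattern (not minikube's actual node_ready.go; the URL is taken from the log, and TLS/auth handling is omitted):

	package main

	import (
		"context"
		"fmt"
		"net/http"
		"time"
	)

	// nodeReady stands in for the real check: minikube GETs /api/v1/nodes/<name>
	// and inspects the node's "Ready" condition. Here any transport error
	// (e.g. "connection refused") or non-200 status counts as "not ready yet".
	func nodeReady(ctx context.Context, url string) error {
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			return err
		}
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("unexpected status: %s", resp.Status)
		}
		return nil
	}

	// waitNodeReady retries nodeReady every 500ms until it succeeds or the
	// overall timeout expires, mirroring the cadence and the final
	// "context deadline exceeded" seen above.
	func waitNodeReady(url string, timeout time.Duration) error {
		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		defer cancel()
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			if err := nodeReady(ctx, url); err == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("WaitNodeCondition: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		err := waitNodeReady("https://192.168.49.2:8441/api/v1/nodes/functional-158523", 6*time.Minute)
		fmt.Println(err)
	}

In the real flow the check decodes the Node object and looks at its Ready condition; the sketch only shows the retry/deadline shape that produced the repeated entries and the GUEST_START exit above.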
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-158523
helpers_test.go:243: (dbg) docker inspect functional-158523:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	        "Created": "2025-10-09T18:56:39.519997973Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 156103,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:56:39.555178961Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hosts",
	        "LogPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4-json.log",
	        "Name": "/functional-158523",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-158523:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-158523",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	                "LowerDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-158523",
	                "Source": "/var/lib/docker/volumes/functional-158523/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-158523",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-158523",
	                "name.minikube.sigs.k8s.io": "functional-158523",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "50f61b8c99f766763e9e30d37584e537ddf10f74215a382a4221cbe7f7c2e821",
	            "SandboxKey": "/var/run/docker/netns/50f61b8c99f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-158523": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:a1:34:24:78:d1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6347eb7b6b2842e266f51d3873c8aa169a4287ea52510c3d62dc4ed41012963c",
	                    "EndpointID": "eb1ada23cbc058d52037dd94574627dd1b0fedf455f291e656f050bf06881952",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-158523",
	                        "dea3735c567c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
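Most of the inspect dump above is incidental; the two fields this post-mortem actually uses are the published host port for the apiserver (8441/tcp -> 127.0.0.1:32781) and the container's address on the profile network (192.168.49.2). A small sketch, assuming the container and network name "functional-158523" from the output above, that pulls just those fields with docker inspect's Go-template --format instead of the full JSON:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// inspect runs `docker inspect -f <format>` against the profile container
	// and returns the rendered template.
	func inspect(format string) (string, error) {
		out, err := exec.Command("docker", "inspect", "-f", format, "functional-158523").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		// Host port published for the apiserver's 8441/tcp ("32781" in the dump above).
		port, err := inspect(`{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`)
		fmt.Println("apiserver host port:", port, err)

		// Container IP on the per-profile network ("192.168.49.2" in the dump above).
		ip, err := inspect(`{{(index .NetworkSettings.Networks "functional-158523").IPAddress}}`)
		fmt.Println("container IP:", ip, err)
	}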
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523: exit status 2 (328.941264ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
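`--format={{.Host}}` renders a single field of minikube's status output, which is why the command prints "Running" yet exits 2: the host container is up while the apiserver behind it is not, and the harness treats that exit code as acceptable. A sketch of the same invocation that also surfaces the exit code (binary path, flags, and profile name copied from the command above):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command the test harness ran; Output() returns *exec.ExitError on a
		// non-zero exit, so both the rendered template and the exit code are visible.
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "functional-158523", "-n", "functional-158523")
		out, err := cmd.Output()

		exitCode := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			exitCode = exitErr.ExitCode()
		}
		fmt.Printf("host=%q exit=%d\n", strings.TrimSpace(string(out)), exitCode) // e.g. host="Running" exit=2
	}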
helpers_test.go:252: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-158523 logs -n 25: (1.009509605s)
helpers_test.go:260: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-484045                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-484045   │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │ 09 Oct 25 18:39 UTC │
	│ start   │ --download-only -p download-docker-070263 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-070263 │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │                     │
	│ delete  │ -p download-docker-070263                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-070263 │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │ 09 Oct 25 18:39 UTC │
	│ start   │ --download-only -p binary-mirror-721152 --alsologtostderr --binary-mirror http://127.0.0.1:36453 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-721152   │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │                     │
	│ delete  │ -p binary-mirror-721152                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-721152   │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │ 09 Oct 25 18:39 UTC │
	│ addons  │ disable dashboard -p addons-139298                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-139298          │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │                     │
	│ addons  │ enable dashboard -p addons-139298                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-139298          │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │                     │
	│ start   │ -p addons-139298 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-139298          │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │                     │
	│ delete  │ -p addons-139298                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-139298          │ jenkins │ v1.37.0 │ 09 Oct 25 18:48 UTC │ 09 Oct 25 18:48 UTC │
	│ start   │ -p nospam-656427 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-656427 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:48 UTC │                     │
	│ start   │ nospam-656427 --log_dir /tmp/nospam-656427 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ start   │ nospam-656427 --log_dir /tmp/nospam-656427 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ start   │ nospam-656427 --log_dir /tmp/nospam-656427 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ pause   │ nospam-656427 --log_dir /tmp/nospam-656427 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ pause   │ nospam-656427 --log_dir /tmp/nospam-656427 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ pause   │ nospam-656427 --log_dir /tmp/nospam-656427 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ unpause │ nospam-656427 --log_dir /tmp/nospam-656427 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ unpause │ nospam-656427 --log_dir /tmp/nospam-656427 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ unpause │ nospam-656427 --log_dir /tmp/nospam-656427 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ nospam-656427 --log_dir /tmp/nospam-656427 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ nospam-656427 --log_dir /tmp/nospam-656427 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ nospam-656427 --log_dir /tmp/nospam-656427 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ delete  │ -p nospam-656427                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ start   │ -p functional-158523 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-158523      │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ start   │ -p functional-158523 --alsologtostderr -v=8                                                                                                                                                                                                                                                                                                                                                                                                                              │ functional-158523      │ jenkins │ v1.37.0 │ 09 Oct 25 19:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:04:53
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:04:53.859600  161014 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:04:53.859894  161014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:53.859904  161014 out.go:374] Setting ErrFile to fd 2...
	I1009 19:04:53.859909  161014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:53.860103  161014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:04:53.860622  161014 out.go:368] Setting JSON to false
	I1009 19:04:53.861569  161014 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2843,"bootTime":1760033851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:04:53.861680  161014 start.go:143] virtualization: kvm guest
	I1009 19:04:53.864538  161014 out.go:179] * [functional-158523] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:04:53.866020  161014 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:04:53.866041  161014 notify.go:221] Checking for updates...
	I1009 19:04:53.868520  161014 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:04:53.869799  161014 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:53.871001  161014 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:04:53.872350  161014 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:04:53.873695  161014 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:04:53.875515  161014 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:53.875628  161014 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:04:53.899122  161014 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:04:53.899239  161014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:04:53.961702  161014 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:04:53.950772825 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:04:53.961810  161014 docker.go:319] overlay module found
	I1009 19:04:53.963901  161014 out.go:179] * Using the docker driver based on existing profile
	I1009 19:04:53.965359  161014 start.go:309] selected driver: docker
	I1009 19:04:53.965397  161014 start.go:930] validating driver "docker" against &{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:04:53.965505  161014 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:04:53.965601  161014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:04:54.024534  161014 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:04:54.014787007 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:04:54.025138  161014 cni.go:84] Creating CNI manager for ""
	I1009 19:04:54.025189  161014 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:04:54.025246  161014 start.go:353] cluster config:
	{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:04:54.027519  161014 out.go:179] * Starting "functional-158523" primary control-plane node in "functional-158523" cluster
	I1009 19:04:54.028967  161014 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:04:54.030473  161014 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:04:54.031821  161014 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:04:54.031876  161014 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:04:54.031885  161014 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:04:54.031986  161014 cache.go:58] Caching tarball of preloaded images
	I1009 19:04:54.032085  161014 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:04:54.032098  161014 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:04:54.032213  161014 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/config.json ...
	I1009 19:04:54.053026  161014 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:04:54.053045  161014 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:04:54.053063  161014 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:04:54.053096  161014 start.go:361] acquireMachinesLock for functional-158523: {Name:mk995713bbd40419f859c4a8640c8ada0479020c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:04:54.053186  161014 start.go:365] duration metric: took 46.429µs to acquireMachinesLock for "functional-158523"
	I1009 19:04:54.053209  161014 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:04:54.053220  161014 fix.go:55] fixHost starting: 
	I1009 19:04:54.053511  161014 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:04:54.070674  161014 fix.go:113] recreateIfNeeded on functional-158523: state=Running err=<nil>
	W1009 19:04:54.070714  161014 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:04:54.072611  161014 out.go:252] * Updating the running docker "functional-158523" container ...
	I1009 19:04:54.072644  161014 machine.go:93] provisionDockerMachine start ...
	I1009 19:04:54.072732  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:54.089158  161014 main.go:141] libmachine: Using SSH client type: native
	I1009 19:04:54.089398  161014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:04:54.089417  161014 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:04:54.234516  161014 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-158523
	
	I1009 19:04:54.234543  161014 ubuntu.go:182] provisioning hostname "functional-158523"
	I1009 19:04:54.234606  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:54.252690  161014 main.go:141] libmachine: Using SSH client type: native
	I1009 19:04:54.252942  161014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:04:54.252960  161014 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-158523 && echo "functional-158523" | sudo tee /etc/hostname
	I1009 19:04:54.409130  161014 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-158523
	
	I1009 19:04:54.409240  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:54.428592  161014 main.go:141] libmachine: Using SSH client type: native
	I1009 19:04:54.428819  161014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:04:54.428839  161014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-158523' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-158523/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-158523' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:04:54.575221  161014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:04:54.575248  161014 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:04:54.575298  161014 ubuntu.go:190] setting up certificates
	I1009 19:04:54.575313  161014 provision.go:84] configureAuth start
	I1009 19:04:54.575366  161014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-158523
	I1009 19:04:54.593157  161014 provision.go:143] copyHostCerts
	I1009 19:04:54.593200  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:04:54.593229  161014 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:04:54.593244  161014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:04:54.593315  161014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:04:54.593491  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:04:54.593517  161014 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:04:54.593524  161014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:04:54.593557  161014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:04:54.593615  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:04:54.593632  161014 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:04:54.593638  161014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:04:54.593693  161014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:04:54.593752  161014 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.functional-158523 san=[127.0.0.1 192.168.49.2 functional-158523 localhost minikube]
	I1009 19:04:54.998231  161014 provision.go:177] copyRemoteCerts
	I1009 19:04:54.998297  161014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:04:54.998335  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.016505  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.120020  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:04:55.120077  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 19:04:55.138116  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:04:55.138187  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:04:55.157031  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:04:55.157100  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:04:55.176045  161014 provision.go:87] duration metric: took 600.715143ms to configureAuth
	I1009 19:04:55.176080  161014 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:04:55.176245  161014 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:55.176357  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.194450  161014 main.go:141] libmachine: Using SSH client type: native
	I1009 19:04:55.194679  161014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:04:55.194701  161014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:04:55.467764  161014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:04:55.467789  161014 machine.go:96] duration metric: took 1.395134259s to provisionDockerMachine
	I1009 19:04:55.467804  161014 start.go:294] postStartSetup for "functional-158523" (driver="docker")
	I1009 19:04:55.467821  161014 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:04:55.467882  161014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:04:55.467922  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.486353  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.591117  161014 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:04:55.594855  161014 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1009 19:04:55.594886  161014 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1009 19:04:55.594893  161014 command_runner.go:130] > VERSION_ID="12"
	I1009 19:04:55.594900  161014 command_runner.go:130] > VERSION="12 (bookworm)"
	I1009 19:04:55.594907  161014 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1009 19:04:55.594911  161014 command_runner.go:130] > ID=debian
	I1009 19:04:55.594915  161014 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1009 19:04:55.594920  161014 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1009 19:04:55.594926  161014 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1009 19:04:55.594992  161014 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:04:55.595011  161014 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:04:55.595023  161014 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:04:55.595090  161014 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:04:55.595204  161014 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:04:55.595227  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:04:55.595320  161014 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts -> hosts in /etc/test/nested/copy/141519
	I1009 19:04:55.595330  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts -> /etc/test/nested/copy/141519/hosts
	I1009 19:04:55.595388  161014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/141519
	I1009 19:04:55.603244  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:04:55.621701  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts --> /etc/test/nested/copy/141519/hosts (40 bytes)
	I1009 19:04:55.640532  161014 start.go:297] duration metric: took 172.708538ms for postStartSetup
	I1009 19:04:55.640625  161014 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:04:55.640672  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.658424  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.758913  161014 command_runner.go:130] > 38%
	I1009 19:04:55.759004  161014 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:04:55.763762  161014 command_runner.go:130] > 182G
	I1009 19:04:55.763807  161014 fix.go:57] duration metric: took 1.710584464s for fixHost
	I1009 19:04:55.763821  161014 start.go:84] releasing machines lock for "functional-158523", held for 1.710622732s
	I1009 19:04:55.763882  161014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-158523
	I1009 19:04:55.781557  161014 ssh_runner.go:195] Run: cat /version.json
	I1009 19:04:55.781620  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.781568  161014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:04:55.781740  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.800026  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.800289  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.899840  161014 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759745255-21703", "minikube_version": "v1.37.0", "commit": "a51fe4b7ffc88febd8814e8831f38772e976d097"}
	I1009 19:04:55.953125  161014 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1009 19:04:55.955421  161014 ssh_runner.go:195] Run: systemctl --version
	I1009 19:04:55.962169  161014 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1009 19:04:55.962207  161014 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1009 19:04:55.962422  161014 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:04:56.001789  161014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 19:04:56.006364  161014 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1009 19:04:56.006710  161014 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:04:56.006818  161014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:04:56.015207  161014 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:04:56.015234  161014 start.go:496] detecting cgroup driver to use...
	I1009 19:04:56.015270  161014 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:04:56.015326  161014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:04:56.030444  161014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:04:56.043355  161014 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:04:56.043439  161014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:04:56.058903  161014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:04:56.072794  161014 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:04:56.155598  161014 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:04:56.243484  161014 docker.go:234] disabling docker service ...
	I1009 19:04:56.243560  161014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:04:56.258472  161014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:04:56.271168  161014 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:04:56.357916  161014 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:04:56.444044  161014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:04:56.457436  161014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:04:56.471973  161014 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1009 19:04:56.472020  161014 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:04:56.472074  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.481231  161014 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:04:56.481304  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.490735  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.499743  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.508857  161014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:04:56.517176  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.525878  161014 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.534146  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
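	Taken together, the sed edits above amount to the following settings in /etc/crio/crio.conf.d/02-crio.conf. This is a reconstructed excerpt based only on the commands shown in the log; the surrounding section headers and any other keys already present in that drop-in are not shown here and are assumed unchanged:
	
		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "systemd"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]
	
	With these in place, CRI-O uses the systemd cgroup driver (matching the "systemd" driver detected on the host above), runs conmon in the pod cgroup, and lets pods bind low ports without extra privileges.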
	I1009 19:04:56.542852  161014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:04:56.549944  161014 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1009 19:04:56.550015  161014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:04:56.557444  161014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:04:56.640120  161014 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:04:56.755858  161014 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:04:56.755937  161014 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:04:56.760115  161014 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1009 19:04:56.760139  161014 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1009 19:04:56.760145  161014 command_runner.go:130] > Device: 0,59	Inode: 3908        Links: 1
	I1009 19:04:56.760152  161014 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 19:04:56.760157  161014 command_runner.go:130] > Access: 2025-10-09 19:04:56.737667059 +0000
	I1009 19:04:56.760162  161014 command_runner.go:130] > Modify: 2025-10-09 19:04:56.737667059 +0000
	I1009 19:04:56.760167  161014 command_runner.go:130] > Change: 2025-10-09 19:04:56.737667059 +0000
	I1009 19:04:56.760171  161014 command_runner.go:130] >  Birth: 2025-10-09 19:04:56.737667059 +0000
	I1009 19:04:56.760191  161014 start.go:564] Will wait 60s for crictl version
	I1009 19:04:56.760238  161014 ssh_runner.go:195] Run: which crictl
	I1009 19:04:56.764068  161014 command_runner.go:130] > /usr/local/bin/crictl
	I1009 19:04:56.764145  161014 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:04:56.790045  161014 command_runner.go:130] > Version:  0.1.0
	I1009 19:04:56.790068  161014 command_runner.go:130] > RuntimeName:  cri-o
	I1009 19:04:56.790072  161014 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1009 19:04:56.790077  161014 command_runner.go:130] > RuntimeApiVersion:  v1
	I1009 19:04:56.790095  161014 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:04:56.790164  161014 ssh_runner.go:195] Run: crio --version
	I1009 19:04:56.817435  161014 command_runner.go:130] > crio version 1.34.1
	I1009 19:04:56.817460  161014 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1009 19:04:56.817466  161014 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1009 19:04:56.817470  161014 command_runner.go:130] >    GitTreeState:   dirty
	I1009 19:04:56.817475  161014 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1009 19:04:56.817480  161014 command_runner.go:130] >    GoVersion:      go1.24.6
	I1009 19:04:56.817483  161014 command_runner.go:130] >    Compiler:       gc
	I1009 19:04:56.817488  161014 command_runner.go:130] >    Platform:       linux/amd64
	I1009 19:04:56.817492  161014 command_runner.go:130] >    Linkmode:       static
	I1009 19:04:56.817496  161014 command_runner.go:130] >    BuildTags:
	I1009 19:04:56.817499  161014 command_runner.go:130] >      static
	I1009 19:04:56.817503  161014 command_runner.go:130] >      netgo
	I1009 19:04:56.817506  161014 command_runner.go:130] >      osusergo
	I1009 19:04:56.817510  161014 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1009 19:04:56.817514  161014 command_runner.go:130] >      seccomp
	I1009 19:04:56.817518  161014 command_runner.go:130] >      apparmor
	I1009 19:04:56.817521  161014 command_runner.go:130] >      selinux
	I1009 19:04:56.817525  161014 command_runner.go:130] >    LDFlags:          unknown
	I1009 19:04:56.817531  161014 command_runner.go:130] >    SeccompEnabled:   true
	I1009 19:04:56.817535  161014 command_runner.go:130] >    AppArmorEnabled:  false
	I1009 19:04:56.819047  161014 ssh_runner.go:195] Run: crio --version
	I1009 19:04:56.846110  161014 command_runner.go:130] > crio version 1.34.1
	I1009 19:04:56.846137  161014 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1009 19:04:56.846145  161014 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1009 19:04:56.846154  161014 command_runner.go:130] >    GitTreeState:   dirty
	I1009 19:04:56.846160  161014 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1009 19:04:56.846166  161014 command_runner.go:130] >    GoVersion:      go1.24.6
	I1009 19:04:56.846172  161014 command_runner.go:130] >    Compiler:       gc
	I1009 19:04:56.846179  161014 command_runner.go:130] >    Platform:       linux/amd64
	I1009 19:04:56.846185  161014 command_runner.go:130] >    Linkmode:       static
	I1009 19:04:56.846193  161014 command_runner.go:130] >    BuildTags:
	I1009 19:04:56.846202  161014 command_runner.go:130] >      static
	I1009 19:04:56.846209  161014 command_runner.go:130] >      netgo
	I1009 19:04:56.846218  161014 command_runner.go:130] >      osusergo
	I1009 19:04:56.846226  161014 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1009 19:04:56.846238  161014 command_runner.go:130] >      seccomp
	I1009 19:04:56.846246  161014 command_runner.go:130] >      apparmor
	I1009 19:04:56.846252  161014 command_runner.go:130] >      selinux
	I1009 19:04:56.846262  161014 command_runner.go:130] >    LDFlags:          unknown
	I1009 19:04:56.846270  161014 command_runner.go:130] >    SeccompEnabled:   true
	I1009 19:04:56.846280  161014 command_runner.go:130] >    AppArmorEnabled:  false
	I1009 19:04:56.849910  161014 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:04:56.851471  161014 cli_runner.go:164] Run: docker network inspect functional-158523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
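	The Go template in this docker network inspect call flattens the network's IPAM settings and attached containers into a single JSON object. A minimal sketch of the shape it would produce for the "functional-158523" network, with the driver, MTU, and subnet values assumed rather than taken from this log (only the 192.168.49.1 gateway and the 192.168.49.2 node IP appear elsewhere in it; the trailing comma in ContainerIPs comes from the template's range loop):
	
		{"Name": "functional-158523","Driver": "bridge","Subnet": "192.168.49.0/24","Gateway": "192.168.49.1","MTU": 1500, "ContainerIPs": ["192.168.49.2/24",]}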
	I1009 19:04:56.867982  161014 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:04:56.872517  161014 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1009 19:04:56.872627  161014 kubeadm.go:883] updating cluster {Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:04:56.872731  161014 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:04:56.872790  161014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:04:56.904568  161014 command_runner.go:130] > {
	I1009 19:04:56.904591  161014 command_runner.go:130] >   "images":  [
	I1009 19:04:56.904595  161014 command_runner.go:130] >     {
	I1009 19:04:56.904603  161014 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1009 19:04:56.904608  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.904617  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1009 19:04:56.904622  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904628  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.904652  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1009 19:04:56.904667  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1009 19:04:56.904673  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904681  161014 command_runner.go:130] >       "size":  "109379124",
	I1009 19:04:56.904688  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.904694  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.904700  161014 command_runner.go:130] >     },
	I1009 19:04:56.904706  161014 command_runner.go:130] >     {
	I1009 19:04:56.904719  161014 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 19:04:56.904728  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.904736  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 19:04:56.904744  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904754  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.904771  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 19:04:56.904786  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 19:04:56.904794  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904799  161014 command_runner.go:130] >       "size":  "31470524",
	I1009 19:04:56.904805  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.904814  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.904822  161014 command_runner.go:130] >     },
	I1009 19:04:56.904831  161014 command_runner.go:130] >     {
	I1009 19:04:56.904841  161014 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1009 19:04:56.904851  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.904861  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1009 19:04:56.904870  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904879  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.904890  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1009 19:04:56.904903  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1009 19:04:56.904912  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904919  161014 command_runner.go:130] >       "size":  "76103547",
	I1009 19:04:56.904928  161014 command_runner.go:130] >       "username":  "nonroot",
	I1009 19:04:56.904938  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.904946  161014 command_runner.go:130] >     },
	I1009 19:04:56.904951  161014 command_runner.go:130] >     {
	I1009 19:04:56.904963  161014 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1009 19:04:56.904972  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.904982  161014 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1009 19:04:56.904988  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904994  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905015  161014 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1009 19:04:56.905029  161014 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1009 19:04:56.905038  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905048  161014 command_runner.go:130] >       "size":  "195976448",
	I1009 19:04:56.905056  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905062  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.905071  161014 command_runner.go:130] >       },
	I1009 19:04:56.905082  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905092  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905096  161014 command_runner.go:130] >     },
	I1009 19:04:56.905099  161014 command_runner.go:130] >     {
	I1009 19:04:56.905111  161014 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1009 19:04:56.905120  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905128  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1009 19:04:56.905137  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905147  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905160  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1009 19:04:56.905174  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1009 19:04:56.905182  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905188  161014 command_runner.go:130] >       "size":  "89046001",
	I1009 19:04:56.905195  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905199  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.905207  161014 command_runner.go:130] >       },
	I1009 19:04:56.905218  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905228  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905235  161014 command_runner.go:130] >     },
	I1009 19:04:56.905240  161014 command_runner.go:130] >     {
	I1009 19:04:56.905253  161014 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1009 19:04:56.905262  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905273  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1009 19:04:56.905280  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905284  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905299  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1009 19:04:56.905315  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1009 19:04:56.905324  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905333  161014 command_runner.go:130] >       "size":  "76004181",
	I1009 19:04:56.905342  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905352  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.905360  161014 command_runner.go:130] >       },
	I1009 19:04:56.905367  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905393  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905402  161014 command_runner.go:130] >     },
	I1009 19:04:56.905407  161014 command_runner.go:130] >     {
	I1009 19:04:56.905417  161014 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1009 19:04:56.905427  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905438  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1009 19:04:56.905446  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905456  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905470  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1009 19:04:56.905482  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1009 19:04:56.905490  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905500  161014 command_runner.go:130] >       "size":  "73138073",
	I1009 19:04:56.905510  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905516  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905525  161014 command_runner.go:130] >     },
	I1009 19:04:56.905533  161014 command_runner.go:130] >     {
	I1009 19:04:56.905543  161014 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1009 19:04:56.905552  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905563  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1009 19:04:56.905571  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905579  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905590  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1009 19:04:56.905613  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1009 19:04:56.905622  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905629  161014 command_runner.go:130] >       "size":  "53844823",
	I1009 19:04:56.905637  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905647  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.905655  161014 command_runner.go:130] >       },
	I1009 19:04:56.905664  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905673  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905681  161014 command_runner.go:130] >     },
	I1009 19:04:56.905690  161014 command_runner.go:130] >     {
	I1009 19:04:56.905696  161014 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1009 19:04:56.905705  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905712  161014 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1009 19:04:56.905721  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905727  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905740  161014 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1009 19:04:56.905754  161014 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1009 19:04:56.905762  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905772  161014 command_runner.go:130] >       "size":  "742092",
	I1009 19:04:56.905783  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905791  161014 command_runner.go:130] >         "value":  "65535"
	I1009 19:04:56.905795  161014 command_runner.go:130] >       },
	I1009 19:04:56.905802  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905808  161014 command_runner.go:130] >       "pinned":  true
	I1009 19:04:56.905816  161014 command_runner.go:130] >     }
	I1009 19:04:56.905822  161014 command_runner.go:130] >   ]
	I1009 19:04:56.905830  161014 command_runner.go:130] > }
	I1009 19:04:56.906014  161014 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:04:56.906027  161014 crio.go:433] Images already preloaded, skipping extraction
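	The output of `sudo crictl images --output json` above is an object with a top-level "images" array, each entry carrying "id", "repoTags", "repoDigests", "size", and "pinned" fields, which is what the preload check walks. A minimal sketch of pulling just the tags out of that output, assuming jq is available on the node (it is not part of what minikube runs in this log):
	
		sudo crictl images --output json | jq -r '.images[].repoTags[]'
	
	This would print one tag per line (for example registry.k8s.io/pause:3.10.1), matching the repoTags entries shown in the JSON above.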
	I1009 19:04:56.906079  161014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:04:56.933720  161014 command_runner.go:130] > {
	I1009 19:04:56.933747  161014 command_runner.go:130] >   "images":  [
	I1009 19:04:56.933753  161014 command_runner.go:130] >     {
	I1009 19:04:56.933769  161014 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1009 19:04:56.933774  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.933781  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1009 19:04:56.933788  161014 command_runner.go:130] >       ],
	I1009 19:04:56.933794  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.933805  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1009 19:04:56.933821  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1009 19:04:56.933827  161014 command_runner.go:130] >       ],
	I1009 19:04:56.933835  161014 command_runner.go:130] >       "size":  "109379124",
	I1009 19:04:56.933845  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.933855  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.933861  161014 command_runner.go:130] >     },
	I1009 19:04:56.933864  161014 command_runner.go:130] >     {
	I1009 19:04:56.933873  161014 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 19:04:56.933879  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.933890  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 19:04:56.933899  161014 command_runner.go:130] >       ],
	I1009 19:04:56.933906  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.933921  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 19:04:56.933935  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 19:04:56.933944  161014 command_runner.go:130] >       ],
	I1009 19:04:56.933951  161014 command_runner.go:130] >       "size":  "31470524",
	I1009 19:04:56.933960  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.933970  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.933975  161014 command_runner.go:130] >     },
	I1009 19:04:56.933979  161014 command_runner.go:130] >     {
	I1009 19:04:56.933992  161014 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1009 19:04:56.934002  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934016  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1009 19:04:56.934029  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934036  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934050  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1009 19:04:56.934065  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1009 19:04:56.934072  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934079  161014 command_runner.go:130] >       "size":  "76103547",
	I1009 19:04:56.934086  161014 command_runner.go:130] >       "username":  "nonroot",
	I1009 19:04:56.934090  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934097  161014 command_runner.go:130] >     },
	I1009 19:04:56.934102  161014 command_runner.go:130] >     {
	I1009 19:04:56.934116  161014 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1009 19:04:56.934126  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934137  161014 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1009 19:04:56.934145  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934151  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934164  161014 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1009 19:04:56.934177  161014 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1009 19:04:56.934183  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934188  161014 command_runner.go:130] >       "size":  "195976448",
	I1009 19:04:56.934197  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934207  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.934216  161014 command_runner.go:130] >       },
	I1009 19:04:56.934263  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934275  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934279  161014 command_runner.go:130] >     },
	I1009 19:04:56.934283  161014 command_runner.go:130] >     {
	I1009 19:04:56.934296  161014 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1009 19:04:56.934306  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934315  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1009 19:04:56.934323  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934329  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934344  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1009 19:04:56.934358  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1009 19:04:56.934372  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934397  161014 command_runner.go:130] >       "size":  "89046001",
	I1009 19:04:56.934408  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934416  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.934425  161014 command_runner.go:130] >       },
	I1009 19:04:56.934435  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934444  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934452  161014 command_runner.go:130] >     },
	I1009 19:04:56.934461  161014 command_runner.go:130] >     {
	I1009 19:04:56.934473  161014 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1009 19:04:56.934480  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934486  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1009 19:04:56.934493  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934499  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934514  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1009 19:04:56.934529  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1009 19:04:56.934538  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934545  161014 command_runner.go:130] >       "size":  "76004181",
	I1009 19:04:56.934554  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934560  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.934566  161014 command_runner.go:130] >       },
	I1009 19:04:56.934572  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934578  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934581  161014 command_runner.go:130] >     },
	I1009 19:04:56.934584  161014 command_runner.go:130] >     {
	I1009 19:04:56.934592  161014 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1009 19:04:56.934597  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934605  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1009 19:04:56.934610  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934616  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934629  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1009 19:04:56.934643  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1009 19:04:56.934652  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934660  161014 command_runner.go:130] >       "size":  "73138073",
	I1009 19:04:56.934667  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934677  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934681  161014 command_runner.go:130] >     },
	I1009 19:04:56.934684  161014 command_runner.go:130] >     {
	I1009 19:04:56.934690  161014 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1009 19:04:56.934696  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934704  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1009 19:04:56.934709  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934716  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934726  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1009 19:04:56.934747  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1009 19:04:56.934753  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934772  161014 command_runner.go:130] >       "size":  "53844823",
	I1009 19:04:56.934779  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934786  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.934795  161014 command_runner.go:130] >       },
	I1009 19:04:56.934801  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934811  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934816  161014 command_runner.go:130] >     },
	I1009 19:04:56.934824  161014 command_runner.go:130] >     {
	I1009 19:04:56.934834  161014 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1009 19:04:56.934843  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934850  161014 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1009 19:04:56.934858  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934862  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934871  161014 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1009 19:04:56.934886  161014 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1009 19:04:56.934895  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934902  161014 command_runner.go:130] >       "size":  "742092",
	I1009 19:04:56.934910  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934917  161014 command_runner.go:130] >         "value":  "65535"
	I1009 19:04:56.934926  161014 command_runner.go:130] >       },
	I1009 19:04:56.934934  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934943  161014 command_runner.go:130] >       "pinned":  true
	I1009 19:04:56.934947  161014 command_runner.go:130] >     }
	I1009 19:04:56.934950  161014 command_runner.go:130] >   ]
	I1009 19:04:56.934953  161014 command_runner.go:130] > }
	I1009 19:04:56.935095  161014 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:04:56.935110  161014 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:04:56.935118  161014 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1009 19:04:56.935242  161014 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-158523 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:04:56.935323  161014 ssh_runner.go:195] Run: crio config
	I1009 19:04:56.978304  161014 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1009 19:04:56.978336  161014 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1009 19:04:56.978345  161014 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1009 19:04:56.978350  161014 command_runner.go:130] > #
	I1009 19:04:56.978359  161014 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1009 19:04:56.978367  161014 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1009 19:04:56.978390  161014 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1009 19:04:56.978401  161014 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1009 19:04:56.978406  161014 command_runner.go:130] > # reload'.
	I1009 19:04:56.978415  161014 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1009 19:04:56.978436  161014 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1009 19:04:56.978448  161014 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1009 19:04:56.978458  161014 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1009 19:04:56.978464  161014 command_runner.go:130] > [crio]
	I1009 19:04:56.978476  161014 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1009 19:04:56.978484  161014 command_runner.go:130] > # containers images, in this directory.
	I1009 19:04:56.978495  161014 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1009 19:04:56.978505  161014 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1009 19:04:56.978514  161014 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1009 19:04:56.978523  161014 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1009 19:04:56.978532  161014 command_runner.go:130] > # imagestore = ""
	I1009 19:04:56.978541  161014 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1009 19:04:56.978554  161014 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1009 19:04:56.978561  161014 command_runner.go:130] > # storage_driver = "overlay"
	I1009 19:04:56.978571  161014 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1009 19:04:56.978581  161014 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1009 19:04:56.978591  161014 command_runner.go:130] > # storage_option = [
	I1009 19:04:56.978596  161014 command_runner.go:130] > # ]
	I1009 19:04:56.978605  161014 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1009 19:04:56.978616  161014 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1009 19:04:56.978623  161014 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1009 19:04:56.978631  161014 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1009 19:04:56.978640  161014 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1009 19:04:56.978647  161014 command_runner.go:130] > # always happen on a node reboot
	I1009 19:04:56.978654  161014 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1009 19:04:56.978669  161014 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1009 19:04:56.978682  161014 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1009 19:04:56.978689  161014 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1009 19:04:56.978695  161014 command_runner.go:130] > # version_file_persist = ""
	I1009 19:04:56.978714  161014 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1009 19:04:56.978728  161014 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1009 19:04:56.978737  161014 command_runner.go:130] > # internal_wipe = true
	I1009 19:04:56.978748  161014 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1009 19:04:56.978760  161014 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1009 19:04:56.978772  161014 command_runner.go:130] > # internal_repair = true
	I1009 19:04:56.978780  161014 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1009 19:04:56.978794  161014 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1009 19:04:56.978805  161014 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1009 19:04:56.978815  161014 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1009 19:04:56.978825  161014 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1009 19:04:56.978833  161014 command_runner.go:130] > [crio.api]
	I1009 19:04:56.978841  161014 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1009 19:04:56.978851  161014 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1009 19:04:56.978860  161014 command_runner.go:130] > # IP address on which the stream server will listen.
	I1009 19:04:56.978870  161014 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1009 19:04:56.978881  161014 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1009 19:04:56.978892  161014 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1009 19:04:56.978901  161014 command_runner.go:130] > # stream_port = "0"
	I1009 19:04:56.978910  161014 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1009 19:04:56.978920  161014 command_runner.go:130] > # stream_enable_tls = false
	I1009 19:04:56.978929  161014 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1009 19:04:56.978954  161014 command_runner.go:130] > # stream_idle_timeout = ""
	I1009 19:04:56.978969  161014 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1009 19:04:56.978978  161014 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1009 19:04:56.978985  161014 command_runner.go:130] > # stream_tls_cert = ""
	I1009 19:04:56.978999  161014 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1009 19:04:56.979007  161014 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1009 19:04:56.979013  161014 command_runner.go:130] > # stream_tls_key = ""
	I1009 19:04:56.979025  161014 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1009 19:04:56.979039  161014 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1009 19:04:56.979049  161014 command_runner.go:130] > # automatically pick up the changes.
	I1009 19:04:56.979058  161014 command_runner.go:130] > # stream_tls_ca = ""
	I1009 19:04:56.979084  161014 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 19:04:56.979098  161014 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1009 19:04:56.979110  161014 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 19:04:56.979117  161014 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1009 19:04:56.979127  161014 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1009 19:04:56.979134  161014 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1009 19:04:56.979139  161014 command_runner.go:130] > [crio.runtime]
	I1009 19:04:56.979146  161014 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1009 19:04:56.979155  161014 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1009 19:04:56.979163  161014 command_runner.go:130] > # "nofile=1024:2048"
	I1009 19:04:56.979177  161014 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1009 19:04:56.979187  161014 command_runner.go:130] > # default_ulimits = [
	I1009 19:04:56.979193  161014 command_runner.go:130] > # ]
	I1009 19:04:56.979206  161014 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1009 19:04:56.979215  161014 command_runner.go:130] > # no_pivot = false
	I1009 19:04:56.979226  161014 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1009 19:04:56.979239  161014 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1009 19:04:56.979251  161014 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1009 19:04:56.979259  161014 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1009 19:04:56.979267  161014 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1009 19:04:56.979277  161014 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 19:04:56.979283  161014 command_runner.go:130] > # conmon = ""
	I1009 19:04:56.979290  161014 command_runner.go:130] > # Cgroup setting for conmon
	I1009 19:04:56.979301  161014 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1009 19:04:56.979311  161014 command_runner.go:130] > conmon_cgroup = "pod"
	I1009 19:04:56.979320  161014 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1009 19:04:56.979327  161014 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1009 19:04:56.979338  161014 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 19:04:56.979347  161014 command_runner.go:130] > # conmon_env = [
	I1009 19:04:56.979353  161014 command_runner.go:130] > # ]
	I1009 19:04:56.979364  161014 command_runner.go:130] > # Additional environment variables to set for all the
	I1009 19:04:56.979392  161014 command_runner.go:130] > # containers. These are overridden if set in the
	I1009 19:04:56.979406  161014 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1009 19:04:56.979412  161014 command_runner.go:130] > # default_env = [
	I1009 19:04:56.979420  161014 command_runner.go:130] > # ]
	I1009 19:04:56.979429  161014 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1009 19:04:56.979443  161014 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1009 19:04:56.979453  161014 command_runner.go:130] > # selinux = false
	I1009 19:04:56.979463  161014 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1009 19:04:56.979479  161014 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1009 19:04:56.979489  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.979497  161014 command_runner.go:130] > # seccomp_profile = ""
	I1009 19:04:56.979509  161014 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1009 19:04:56.979522  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.979529  161014 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1009 19:04:56.979542  161014 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1009 19:04:56.979555  161014 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1009 19:04:56.979564  161014 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1009 19:04:56.979574  161014 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1009 19:04:56.979585  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.979593  161014 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1009 19:04:56.979605  161014 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1009 19:04:56.979615  161014 command_runner.go:130] > # the cgroup blockio controller.
	I1009 19:04:56.979622  161014 command_runner.go:130] > # blockio_config_file = ""
	I1009 19:04:56.979636  161014 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1009 19:04:56.979642  161014 command_runner.go:130] > # blockio parameters.
	I1009 19:04:56.979648  161014 command_runner.go:130] > # blockio_reload = false
	I1009 19:04:56.979658  161014 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1009 19:04:56.979664  161014 command_runner.go:130] > # irqbalance daemon.
	I1009 19:04:56.979672  161014 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1009 19:04:56.979681  161014 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1009 19:04:56.979690  161014 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1009 19:04:56.979700  161014 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1009 19:04:56.979710  161014 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1009 19:04:56.979724  161014 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1009 19:04:56.979731  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.979741  161014 command_runner.go:130] > # rdt_config_file = ""
	I1009 19:04:56.979753  161014 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1009 19:04:56.979764  161014 command_runner.go:130] > # cgroup_manager = "systemd"
	I1009 19:04:56.979773  161014 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1009 19:04:56.979783  161014 command_runner.go:130] > # separate_pull_cgroup = ""
	I1009 19:04:56.979791  161014 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1009 19:04:56.979800  161014 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1009 19:04:56.979809  161014 command_runner.go:130] > # will be added.
	I1009 19:04:56.979817  161014 command_runner.go:130] > # default_capabilities = [
	I1009 19:04:56.979826  161014 command_runner.go:130] > # 	"CHOWN",
	I1009 19:04:56.979832  161014 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1009 19:04:56.979840  161014 command_runner.go:130] > # 	"FSETID",
	I1009 19:04:56.979846  161014 command_runner.go:130] > # 	"FOWNER",
	I1009 19:04:56.979855  161014 command_runner.go:130] > # 	"SETGID",
	I1009 19:04:56.979876  161014 command_runner.go:130] > # 	"SETUID",
	I1009 19:04:56.979885  161014 command_runner.go:130] > # 	"SETPCAP",
	I1009 19:04:56.979891  161014 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1009 19:04:56.979901  161014 command_runner.go:130] > # 	"KILL",
	I1009 19:04:56.979906  161014 command_runner.go:130] > # ]
	I1009 19:04:56.979920  161014 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1009 19:04:56.979930  161014 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1009 19:04:56.979950  161014 command_runner.go:130] > # add_inheritable_capabilities = false
	I1009 19:04:56.979963  161014 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1009 19:04:56.979972  161014 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 19:04:56.979977  161014 command_runner.go:130] > default_sysctls = [
	I1009 19:04:56.979993  161014 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1009 19:04:56.979997  161014 command_runner.go:130] > ]
	I1009 19:04:56.980003  161014 command_runner.go:130] > # List of devices on the host that a
	I1009 19:04:56.980010  161014 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1009 19:04:56.980015  161014 command_runner.go:130] > # allowed_devices = [
	I1009 19:04:56.980019  161014 command_runner.go:130] > # 	"/dev/fuse",
	I1009 19:04:56.980024  161014 command_runner.go:130] > # 	"/dev/net/tun",
	I1009 19:04:56.980029  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980035  161014 command_runner.go:130] > # List of additional devices. specified as
	I1009 19:04:56.980047  161014 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1009 19:04:56.980055  161014 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1009 19:04:56.980063  161014 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 19:04:56.980069  161014 command_runner.go:130] > # additional_devices = [
	I1009 19:04:56.980072  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980079  161014 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1009 19:04:56.980084  161014 command_runner.go:130] > # cdi_spec_dirs = [
	I1009 19:04:56.980091  161014 command_runner.go:130] > # 	"/etc/cdi",
	I1009 19:04:56.980097  161014 command_runner.go:130] > # 	"/var/run/cdi",
	I1009 19:04:56.980101  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980111  161014 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1009 19:04:56.980120  161014 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1009 19:04:56.980126  161014 command_runner.go:130] > # Defaults to false.
	I1009 19:04:56.980133  161014 command_runner.go:130] > # device_ownership_from_security_context = false
	I1009 19:04:56.980146  161014 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1009 19:04:56.980157  161014 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1009 19:04:56.980163  161014 command_runner.go:130] > # hooks_dir = [
	I1009 19:04:56.980167  161014 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1009 19:04:56.980173  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980179  161014 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1009 19:04:56.980187  161014 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1009 19:04:56.980192  161014 command_runner.go:130] > # its default mounts from the following two files:
	I1009 19:04:56.980197  161014 command_runner.go:130] > #
	I1009 19:04:56.980202  161014 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1009 19:04:56.980211  161014 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1009 19:04:56.980218  161014 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1009 19:04:56.980221  161014 command_runner.go:130] > #
	I1009 19:04:56.980230  161014 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1009 19:04:56.980236  161014 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1009 19:04:56.980244  161014 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1009 19:04:56.980252  161014 command_runner.go:130] > #      only add mounts it finds in this file.
	I1009 19:04:56.980255  161014 command_runner.go:130] > #
	I1009 19:04:56.980261  161014 command_runner.go:130] > # default_mounts_file = ""
	I1009 19:04:56.980266  161014 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1009 19:04:56.980275  161014 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1009 19:04:56.980281  161014 command_runner.go:130] > # pids_limit = -1
	I1009 19:04:56.980286  161014 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1009 19:04:56.980294  161014 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1009 19:04:56.980300  161014 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1009 19:04:56.980309  161014 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1009 19:04:56.980315  161014 command_runner.go:130] > # log_size_max = -1
	I1009 19:04:56.980322  161014 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1009 19:04:56.980328  161014 command_runner.go:130] > # log_to_journald = false
	I1009 19:04:56.980335  161014 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1009 19:04:56.980341  161014 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1009 19:04:56.980345  161014 command_runner.go:130] > # Path to directory for container attach sockets.
	I1009 19:04:56.980352  161014 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1009 19:04:56.980357  161014 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1009 19:04:56.980365  161014 command_runner.go:130] > # bind_mount_prefix = ""
	I1009 19:04:56.980370  161014 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1009 19:04:56.980376  161014 command_runner.go:130] > # read_only = false
	I1009 19:04:56.980395  161014 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1009 19:04:56.980405  161014 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1009 19:04:56.980413  161014 command_runner.go:130] > # live configuration reload.
	I1009 19:04:56.980417  161014 command_runner.go:130] > # log_level = "info"
	I1009 19:04:56.980425  161014 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1009 19:04:56.980430  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.980435  161014 command_runner.go:130] > # log_filter = ""
	I1009 19:04:56.980441  161014 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1009 19:04:56.980449  161014 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1009 19:04:56.980455  161014 command_runner.go:130] > # separated by comma.
	I1009 19:04:56.980462  161014 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:04:56.980467  161014 command_runner.go:130] > # uid_mappings = ""
	I1009 19:04:56.980473  161014 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1009 19:04:56.980480  161014 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1009 19:04:56.980486  161014 command_runner.go:130] > # separated by comma.
	I1009 19:04:56.980496  161014 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:04:56.980502  161014 command_runner.go:130] > # gid_mappings = ""
	I1009 19:04:56.980508  161014 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1009 19:04:56.980516  161014 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 19:04:56.980524  161014 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 19:04:56.980534  161014 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:04:56.980540  161014 command_runner.go:130] > # minimum_mappable_uid = -1
	I1009 19:04:56.980547  161014 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1009 19:04:56.980556  161014 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 19:04:56.980562  161014 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 19:04:56.980569  161014 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:04:56.980575  161014 command_runner.go:130] > # minimum_mappable_gid = -1
	I1009 19:04:56.980581  161014 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1009 19:04:56.980588  161014 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1009 19:04:56.980593  161014 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1009 19:04:56.980599  161014 command_runner.go:130] > # ctr_stop_timeout = 30
	I1009 19:04:56.980605  161014 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1009 19:04:56.980612  161014 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1009 19:04:56.980616  161014 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1009 19:04:56.980623  161014 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1009 19:04:56.980627  161014 command_runner.go:130] > # drop_infra_ctr = true
	I1009 19:04:56.980635  161014 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1009 19:04:56.980640  161014 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1009 19:04:56.980649  161014 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1009 19:04:56.980657  161014 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1009 19:04:56.980666  161014 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1009 19:04:56.980674  161014 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1009 19:04:56.980682  161014 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1009 19:04:56.980687  161014 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1009 19:04:56.980695  161014 command_runner.go:130] > # shared_cpuset = ""
	I1009 19:04:56.980703  161014 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1009 19:04:56.980707  161014 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1009 19:04:56.980712  161014 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1009 19:04:56.980719  161014 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1009 19:04:56.980725  161014 command_runner.go:130] > # pinns_path = ""
	I1009 19:04:56.980730  161014 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1009 19:04:56.980738  161014 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1009 19:04:56.980742  161014 command_runner.go:130] > # enable_criu_support = true
	I1009 19:04:56.980749  161014 command_runner.go:130] > # Enable/disable the generation of the container,
	I1009 19:04:56.980754  161014 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1009 19:04:56.980761  161014 command_runner.go:130] > # enable_pod_events = false
	I1009 19:04:56.980767  161014 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1009 19:04:56.980775  161014 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1009 19:04:56.980779  161014 command_runner.go:130] > # default_runtime = "crun"
	I1009 19:04:56.980785  161014 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1009 19:04:56.980792  161014 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1009 19:04:56.980803  161014 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1009 19:04:56.980809  161014 command_runner.go:130] > # creation as a file is not desired either.
	I1009 19:04:56.980817  161014 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1009 19:04:56.980823  161014 command_runner.go:130] > # the hostname is being managed dynamically.
	I1009 19:04:56.980828  161014 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1009 19:04:56.980831  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980836  161014 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1009 19:04:56.980844  161014 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1009 19:04:56.980850  161014 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1009 19:04:56.980858  161014 command_runner.go:130] > # Each entry in the table should follow the format:
	I1009 19:04:56.980861  161014 command_runner.go:130] > #
	I1009 19:04:56.980865  161014 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1009 19:04:56.980872  161014 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1009 19:04:56.980875  161014 command_runner.go:130] > # runtime_type = "oci"
	I1009 19:04:56.980882  161014 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1009 19:04:56.980887  161014 command_runner.go:130] > # inherit_default_runtime = false
	I1009 19:04:56.980894  161014 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1009 19:04:56.980898  161014 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1009 19:04:56.980902  161014 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1009 19:04:56.980906  161014 command_runner.go:130] > # monitor_env = []
	I1009 19:04:56.980910  161014 command_runner.go:130] > # privileged_without_host_devices = false
	I1009 19:04:56.980917  161014 command_runner.go:130] > # allowed_annotations = []
	I1009 19:04:56.980922  161014 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1009 19:04:56.980928  161014 command_runner.go:130] > # no_sync_log = false
	I1009 19:04:56.980932  161014 command_runner.go:130] > # default_annotations = {}
	I1009 19:04:56.980939  161014 command_runner.go:130] > # stream_websockets = false
	I1009 19:04:56.980949  161014 command_runner.go:130] > # seccomp_profile = ""
	I1009 19:04:56.980985  161014 command_runner.go:130] > # Where:
	I1009 19:04:56.980994  161014 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1009 19:04:56.980999  161014 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1009 19:04:56.981005  161014 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1009 19:04:56.981010  161014 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1009 19:04:56.981014  161014 command_runner.go:130] > #   in $PATH.
	I1009 19:04:56.981020  161014 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1009 19:04:56.981024  161014 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1009 19:04:56.981032  161014 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1009 19:04:56.981035  161014 command_runner.go:130] > #   state.
	I1009 19:04:56.981041  161014 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1009 19:04:56.981049  161014 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1009 19:04:56.981054  161014 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1009 19:04:56.981063  161014 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1009 19:04:56.981067  161014 command_runner.go:130] > #   the values from the default runtime on load time.
	I1009 19:04:56.981078  161014 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1009 19:04:56.981086  161014 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1009 19:04:56.981092  161014 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1009 19:04:56.981100  161014 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1009 19:04:56.981105  161014 command_runner.go:130] > #   The currently recognized values are:
	I1009 19:04:56.981113  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1009 19:04:56.981123  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1009 19:04:56.981130  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1009 19:04:56.981135  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1009 19:04:56.981144  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1009 19:04:56.981153  161014 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1009 19:04:56.981161  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1009 19:04:56.981169  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1009 19:04:56.981177  161014 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1009 19:04:56.981183  161014 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1009 19:04:56.981191  161014 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1009 19:04:56.981199  161014 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1009 19:04:56.981204  161014 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1009 19:04:56.981213  161014 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1009 19:04:56.981221  161014 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1009 19:04:56.981227  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1009 19:04:56.981235  161014 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1009 19:04:56.981239  161014 command_runner.go:130] > #   deprecated option "conmon".
	I1009 19:04:56.981248  161014 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1009 19:04:56.981255  161014 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1009 19:04:56.981261  161014 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1009 19:04:56.981268  161014 command_runner.go:130] > #   should be moved to the container's cgroup
	I1009 19:04:56.981273  161014 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1009 19:04:56.981280  161014 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1009 19:04:56.981287  161014 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1009 19:04:56.981293  161014 command_runner.go:130] > #   conmon-rs by using:
	I1009 19:04:56.981300  161014 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1009 19:04:56.981309  161014 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1009 19:04:56.981318  161014 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1009 19:04:56.981326  161014 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1009 19:04:56.981334  161014 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1009 19:04:56.981341  161014 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1009 19:04:56.981351  161014 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1009 19:04:56.981359  161014 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1009 19:04:56.981370  161014 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1009 19:04:56.981395  161014 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1009 19:04:56.981405  161014 command_runner.go:130] > #   when a machine crash happens.
	I1009 19:04:56.981411  161014 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1009 19:04:56.981421  161014 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1009 19:04:56.981431  161014 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1009 19:04:56.981437  161014 command_runner.go:130] > #   seccomp profile for the runtime.
	I1009 19:04:56.981443  161014 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1009 19:04:56.981452  161014 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1009 19:04:56.981455  161014 command_runner.go:130] > #
	I1009 19:04:56.981460  161014 command_runner.go:130] > # Using the seccomp notifier feature:
	I1009 19:04:56.981465  161014 command_runner.go:130] > #
	I1009 19:04:56.981472  161014 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1009 19:04:56.981480  161014 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1009 19:04:56.981483  161014 command_runner.go:130] > #
	I1009 19:04:56.981490  161014 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1009 19:04:56.981498  161014 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1009 19:04:56.981501  161014 command_runner.go:130] > #
	I1009 19:04:56.981507  161014 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1009 19:04:56.981512  161014 command_runner.go:130] > # feature.
	I1009 19:04:56.981515  161014 command_runner.go:130] > #
	I1009 19:04:56.981537  161014 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1009 19:04:56.981545  161014 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1009 19:04:56.981553  161014 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1009 19:04:56.981562  161014 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1009 19:04:56.981568  161014 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1009 19:04:56.981573  161014 command_runner.go:130] > #
	I1009 19:04:56.981579  161014 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1009 19:04:56.981587  161014 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1009 19:04:56.981590  161014 command_runner.go:130] > #
	I1009 19:04:56.981598  161014 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1009 19:04:56.981603  161014 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1009 19:04:56.981608  161014 command_runner.go:130] > #
	I1009 19:04:56.981614  161014 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1009 19:04:56.981622  161014 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1009 19:04:56.981628  161014 command_runner.go:130] > # limitation.
	I1009 19:04:56.981632  161014 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1009 19:04:56.981639  161014 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1009 19:04:56.981642  161014 command_runner.go:130] > runtime_type = ""
	I1009 19:04:56.981648  161014 command_runner.go:130] > runtime_root = "/run/crun"
	I1009 19:04:56.981652  161014 command_runner.go:130] > inherit_default_runtime = false
	I1009 19:04:56.981657  161014 command_runner.go:130] > runtime_config_path = ""
	I1009 19:04:56.981663  161014 command_runner.go:130] > container_min_memory = ""
	I1009 19:04:56.981667  161014 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 19:04:56.981673  161014 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 19:04:56.981677  161014 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 19:04:56.981683  161014 command_runner.go:130] > allowed_annotations = [
	I1009 19:04:56.981687  161014 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1009 19:04:56.981694  161014 command_runner.go:130] > ]
	I1009 19:04:56.981699  161014 command_runner.go:130] > privileged_without_host_devices = false
	I1009 19:04:56.981705  161014 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1009 19:04:56.981709  161014 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1009 19:04:56.981715  161014 command_runner.go:130] > runtime_type = ""
	I1009 19:04:56.981719  161014 command_runner.go:130] > runtime_root = "/run/runc"
	I1009 19:04:56.981725  161014 command_runner.go:130] > inherit_default_runtime = false
	I1009 19:04:56.981729  161014 command_runner.go:130] > runtime_config_path = ""
	I1009 19:04:56.981735  161014 command_runner.go:130] > container_min_memory = ""
	I1009 19:04:56.981739  161014 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 19:04:56.981744  161014 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 19:04:56.981750  161014 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 19:04:56.981754  161014 command_runner.go:130] > privileged_without_host_devices = false
	I1009 19:04:56.981761  161014 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1009 19:04:56.981769  161014 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1009 19:04:56.981774  161014 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1009 19:04:56.981783  161014 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1009 19:04:56.981795  161014 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1009 19:04:56.981807  161014 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1009 19:04:56.981815  161014 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1009 19:04:56.981823  161014 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1009 19:04:56.981831  161014 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1009 19:04:56.981840  161014 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1009 19:04:56.981848  161014 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1009 19:04:56.981854  161014 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1009 19:04:56.981859  161014 command_runner.go:130] > # Example:
	I1009 19:04:56.981864  161014 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1009 19:04:56.981871  161014 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1009 19:04:56.981875  161014 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1009 19:04:56.981884  161014 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1009 19:04:56.981899  161014 command_runner.go:130] > # cpuset = "0-1"
	I1009 19:04:56.981905  161014 command_runner.go:130] > # cpushares = "5"
	I1009 19:04:56.981909  161014 command_runner.go:130] > # cpuquota = "1000"
	I1009 19:04:56.981912  161014 command_runner.go:130] > # cpuperiod = "100000"
	I1009 19:04:56.981920  161014 command_runner.go:130] > # cpulimit = "35"
	I1009 19:04:56.981926  161014 command_runner.go:130] > # Where:
	I1009 19:04:56.981936  161014 command_runner.go:130] > # The workload name is workload-type.
	I1009 19:04:56.981948  161014 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1009 19:04:56.981955  161014 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1009 19:04:56.981962  161014 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1009 19:04:56.981971  161014 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1009 19:04:56.981979  161014 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
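	For the workload example above, a pod opts in with the activation annotation and can override individual resources per container using the $annotation_prefix.$resource/$ctrName form; a minimal sketch with placeholder names, assuming the [crio.runtime.workloads.workload-type] stanza shown in the comments is actually configured:
	
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo                  # hypothetical name
	  annotations:
	    # Opt the pod into the "workload-type" workload (key only, value ignored).
	    io.crio/workload: ""
	    # Override the default cpushares for the container named "app".
	    io.crio.workload-type.cpushares/app: "200"
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.10.1
	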
	I1009 19:04:56.981984  161014 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1009 19:04:56.981993  161014 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1009 19:04:56.981997  161014 command_runner.go:130] > # Default value is set to true
	I1009 19:04:56.982003  161014 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1009 19:04:56.982009  161014 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1009 19:04:56.982013  161014 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1009 19:04:56.982017  161014 command_runner.go:130] > # Default value is set to 'false'
	I1009 19:04:56.982020  161014 command_runner.go:130] > # disable_hostport_mapping = false
	I1009 19:04:56.982025  161014 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1009 19:04:56.982034  161014 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1009 19:04:56.982039  161014 command_runner.go:130] > # timezone = ""
	I1009 19:04:56.982045  161014 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1009 19:04:56.982050  161014 command_runner.go:130] > #
	I1009 19:04:56.982056  161014 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1009 19:04:56.982064  161014 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1009 19:04:56.982067  161014 command_runner.go:130] > [crio.image]
	I1009 19:04:56.982072  161014 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1009 19:04:56.982080  161014 command_runner.go:130] > # default_transport = "docker://"
	I1009 19:04:56.982085  161014 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1009 19:04:56.982093  161014 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1009 19:04:56.982100  161014 command_runner.go:130] > # global_auth_file = ""
	I1009 19:04:56.982105  161014 command_runner.go:130] > # The image used to instantiate infra containers.
	I1009 19:04:56.982112  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.982116  161014 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1009 19:04:56.982124  161014 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1009 19:04:56.982132  161014 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1009 19:04:56.982137  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.982143  161014 command_runner.go:130] > # pause_image_auth_file = ""
	I1009 19:04:56.982148  161014 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1009 19:04:56.982156  161014 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1009 19:04:56.982162  161014 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1009 19:04:56.982170  161014 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1009 19:04:56.982173  161014 command_runner.go:130] > # pause_command = "/pause"
	I1009 19:04:56.982178  161014 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1009 19:04:56.982186  161014 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1009 19:04:56.982191  161014 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1009 19:04:56.982199  161014 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1009 19:04:56.982204  161014 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1009 19:04:56.982213  161014 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1009 19:04:56.982219  161014 command_runner.go:130] > # pinned_images = [
	I1009 19:04:56.982222  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982227  161014 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1009 19:04:56.982235  161014 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1009 19:04:56.982241  161014 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1009 19:04:56.982248  161014 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1009 19:04:56.982253  161014 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1009 19:04:56.982260  161014 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1009 19:04:56.982265  161014 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1009 19:04:56.982274  161014 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1009 19:04:56.982282  161014 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1009 19:04:56.982287  161014 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1009 19:04:56.982295  161014 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1009 19:04:56.982302  161014 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1009 19:04:56.982307  161014 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1009 19:04:56.982316  161014 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1009 19:04:56.982322  161014 command_runner.go:130] > # changing them here.
	I1009 19:04:56.982327  161014 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1009 19:04:56.982333  161014 command_runner.go:130] > # insecure_registries = [
	I1009 19:04:56.982336  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982342  161014 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1009 19:04:56.982352  161014 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1009 19:04:56.982359  161014 command_runner.go:130] > # image_volumes = "mkdir"
	I1009 19:04:56.982364  161014 command_runner.go:130] > # Temporary directory to use for storing big files
	I1009 19:04:56.982370  161014 command_runner.go:130] > # big_files_temporary_dir = ""
	I1009 19:04:56.982385  161014 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1009 19:04:56.982398  161014 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1009 19:04:56.982403  161014 command_runner.go:130] > # auto_reload_registries = false
	I1009 19:04:56.982412  161014 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1009 19:04:56.982419  161014 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1009 19:04:56.982427  161014 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1009 19:04:56.982431  161014 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1009 19:04:56.982435  161014 command_runner.go:130] > # The mode of short name resolution.
	I1009 19:04:56.982441  161014 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1009 19:04:56.982450  161014 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1009 19:04:56.982455  161014 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1009 19:04:56.982460  161014 command_runner.go:130] > # short_name_mode = "enforcing"
	I1009 19:04:56.982465  161014 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1009 19:04:56.982472  161014 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1009 19:04:56.982476  161014 command_runner.go:130] > # oci_artifact_mount_support = true
	I1009 19:04:56.982484  161014 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1009 19:04:56.982487  161014 command_runner.go:130] > # CNI plugins.
	I1009 19:04:56.982490  161014 command_runner.go:130] > [crio.network]
	I1009 19:04:56.982496  161014 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1009 19:04:56.982501  161014 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1009 19:04:56.982507  161014 command_runner.go:130] > # cni_default_network = ""
	I1009 19:04:56.982512  161014 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1009 19:04:56.982519  161014 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1009 19:04:56.982524  161014 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1009 19:04:56.982530  161014 command_runner.go:130] > # plugin_dirs = [
	I1009 19:04:56.982533  161014 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1009 19:04:56.982536  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982540  161014 command_runner.go:130] > # List of included pod metrics.
	I1009 19:04:56.982544  161014 command_runner.go:130] > # included_pod_metrics = [
	I1009 19:04:56.982547  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982552  161014 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1009 19:04:56.982558  161014 command_runner.go:130] > [crio.metrics]
	I1009 19:04:56.982562  161014 command_runner.go:130] > # Globally enable or disable metrics support.
	I1009 19:04:56.982566  161014 command_runner.go:130] > # enable_metrics = false
	I1009 19:04:56.982570  161014 command_runner.go:130] > # Specify enabled metrics collectors.
	I1009 19:04:56.982574  161014 command_runner.go:130] > # Per default all metrics are enabled.
	I1009 19:04:56.982579  161014 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1009 19:04:56.982588  161014 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1009 19:04:56.982593  161014 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1009 19:04:56.982598  161014 command_runner.go:130] > # metrics_collectors = [
	I1009 19:04:56.982602  161014 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1009 19:04:56.982607  161014 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1009 19:04:56.982610  161014 command_runner.go:130] > # 	"containers_oom_total",
	I1009 19:04:56.982614  161014 command_runner.go:130] > # 	"processes_defunct",
	I1009 19:04:56.982617  161014 command_runner.go:130] > # 	"operations_total",
	I1009 19:04:56.982621  161014 command_runner.go:130] > # 	"operations_latency_seconds",
	I1009 19:04:56.982625  161014 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1009 19:04:56.982629  161014 command_runner.go:130] > # 	"operations_errors_total",
	I1009 19:04:56.982632  161014 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1009 19:04:56.982636  161014 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1009 19:04:56.982640  161014 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1009 19:04:56.982643  161014 command_runner.go:130] > # 	"image_pulls_success_total",
	I1009 19:04:56.982648  161014 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1009 19:04:56.982652  161014 command_runner.go:130] > # 	"containers_oom_count_total",
	I1009 19:04:56.982656  161014 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1009 19:04:56.982660  161014 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1009 19:04:56.982664  161014 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1009 19:04:56.982667  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982672  161014 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1009 19:04:56.982675  161014 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1009 19:04:56.982680  161014 command_runner.go:130] > # The port on which the metrics server will listen.
	I1009 19:04:56.982683  161014 command_runner.go:130] > # metrics_port = 9090
	I1009 19:04:56.982689  161014 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1009 19:04:56.982693  161014 command_runner.go:130] > # metrics_socket = ""
	I1009 19:04:56.982698  161014 command_runner.go:130] > # The certificate for the secure metrics server.
	I1009 19:04:56.982706  161014 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1009 19:04:56.982712  161014 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1009 19:04:56.982718  161014 command_runner.go:130] > # certificate on any modification event.
	I1009 19:04:56.982722  161014 command_runner.go:130] > # metrics_cert = ""
	I1009 19:04:56.982735  161014 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1009 19:04:56.982741  161014 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1009 19:04:56.982746  161014 command_runner.go:130] > # metrics_key = ""
	I1009 19:04:56.982753  161014 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1009 19:04:56.982758  161014 command_runner.go:130] > [crio.tracing]
	I1009 19:04:56.982766  161014 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1009 19:04:56.982771  161014 command_runner.go:130] > # enable_tracing = false
	I1009 19:04:56.982779  161014 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1009 19:04:56.982788  161014 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1009 19:04:56.982798  161014 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1009 19:04:56.982809  161014 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1009 19:04:56.982818  161014 command_runner.go:130] > # CRI-O NRI configuration.
	I1009 19:04:56.982821  161014 command_runner.go:130] > [crio.nri]
	I1009 19:04:56.982825  161014 command_runner.go:130] > # Globally enable or disable NRI.
	I1009 19:04:56.982832  161014 command_runner.go:130] > # enable_nri = true
	I1009 19:04:56.982836  161014 command_runner.go:130] > # NRI socket to listen on.
	I1009 19:04:56.982842  161014 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1009 19:04:56.982846  161014 command_runner.go:130] > # NRI plugin directory to use.
	I1009 19:04:56.982851  161014 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1009 19:04:56.982856  161014 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1009 19:04:56.982863  161014 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1009 19:04:56.982868  161014 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1009 19:04:56.982900  161014 command_runner.go:130] > # nri_disable_connections = false
	I1009 19:04:56.982908  161014 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1009 19:04:56.982912  161014 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1009 19:04:56.982916  161014 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1009 19:04:56.982920  161014 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1009 19:04:56.982926  161014 command_runner.go:130] > # NRI default validator configuration.
	I1009 19:04:56.982933  161014 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1009 19:04:56.982946  161014 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1009 19:04:56.982953  161014 command_runner.go:130] > # can be restricted/rejected:
	I1009 19:04:56.982956  161014 command_runner.go:130] > # - OCI hook injection
	I1009 19:04:56.982961  161014 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1009 19:04:56.982969  161014 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1009 19:04:56.982974  161014 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1009 19:04:56.982982  161014 command_runner.go:130] > # - adjustment of linux namespaces
	I1009 19:04:56.982988  161014 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1009 19:04:56.982996  161014 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1009 19:04:56.983002  161014 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1009 19:04:56.983007  161014 command_runner.go:130] > #
	I1009 19:04:56.983011  161014 command_runner.go:130] > # [crio.nri.default_validator]
	I1009 19:04:56.983015  161014 command_runner.go:130] > # nri_enable_default_validator = false
	I1009 19:04:56.983020  161014 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1009 19:04:56.983027  161014 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1009 19:04:56.983032  161014 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1009 19:04:56.983039  161014 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1009 19:04:56.983044  161014 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1009 19:04:56.983050  161014 command_runner.go:130] > # nri_validator_required_plugins = [
	I1009 19:04:56.983053  161014 command_runner.go:130] > # ]
	I1009 19:04:56.983058  161014 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1009 19:04:56.983066  161014 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1009 19:04:56.983069  161014 command_runner.go:130] > [crio.stats]
	I1009 19:04:56.983074  161014 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1009 19:04:56.983087  161014 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1009 19:04:56.983092  161014 command_runner.go:130] > # stats_collection_period = 0
	I1009 19:04:56.983097  161014 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1009 19:04:56.983106  161014 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1009 19:04:56.983109  161014 command_runner.go:130] > # collection_period = 0
	I1009 19:04:56.983133  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.961902946Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1009 19:04:56.983143  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.961928249Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1009 19:04:56.983151  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.961952575Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1009 19:04:56.983160  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.961969788Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1009 19:04:56.983168  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.962036562Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.983178  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.96221376Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1009 19:04:56.983187  161014 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1009 19:04:56.983250  161014 cni.go:84] Creating CNI manager for ""
	I1009 19:04:56.983259  161014 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:04:56.983280  161014 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:04:56.983306  161014 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-158523 NodeName:functional-158523 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:04:56.983442  161014 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-158523"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:04:56.983504  161014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:04:56.992256  161014 command_runner.go:130] > kubeadm
	I1009 19:04:56.992278  161014 command_runner.go:130] > kubectl
	I1009 19:04:56.992282  161014 command_runner.go:130] > kubelet
	I1009 19:04:56.992304  161014 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:04:56.992347  161014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:04:57.000522  161014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 19:04:57.013113  161014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:04:57.026211  161014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1009 19:04:57.038776  161014 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:04:57.042573  161014 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1009 19:04:57.042649  161014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:04:57.130268  161014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:04:57.143785  161014 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523 for IP: 192.168.49.2
	I1009 19:04:57.143808  161014 certs.go:195] generating shared ca certs ...
	I1009 19:04:57.143829  161014 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:04:57.144031  161014 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:04:57.144072  161014 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:04:57.144082  161014 certs.go:257] generating profile certs ...
	I1009 19:04:57.144182  161014 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key
	I1009 19:04:57.144224  161014 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key.1809350a
	I1009 19:04:57.144260  161014 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key
	I1009 19:04:57.144272  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:04:57.144283  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:04:57.144293  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:04:57.144302  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:04:57.144314  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:04:57.144325  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:04:57.144336  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:04:57.144348  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:04:57.144426  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:04:57.144461  161014 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:04:57.144470  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:04:57.144493  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:04:57.144516  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:04:57.144537  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:04:57.144579  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:04:57.144605  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.144619  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.144631  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.145144  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:04:57.163977  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:04:57.182180  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:04:57.200741  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:04:57.219086  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:04:57.236775  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:04:57.254529  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:04:57.272276  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:04:57.290804  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:04:57.309893  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:04:57.327963  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:04:57.345810  161014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:04:57.359185  161014 ssh_runner.go:195] Run: openssl version
	I1009 19:04:57.366137  161014 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1009 19:04:57.366338  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:04:57.375985  161014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.380041  161014 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.380082  161014 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.380117  161014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.415315  161014 command_runner.go:130] > b5213941
	I1009 19:04:57.415413  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:04:57.424315  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:04:57.433300  161014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.437553  161014 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.437594  161014 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.437635  161014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.472859  161014 command_runner.go:130] > 51391683
	I1009 19:04:57.473177  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:04:57.481800  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:04:57.490997  161014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.494992  161014 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.495040  161014 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.495095  161014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.529155  161014 command_runner.go:130] > 3ec20f2e
	I1009 19:04:57.529240  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:04:57.537710  161014 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:04:57.541624  161014 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:04:57.541645  161014 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1009 19:04:57.541653  161014 command_runner.go:130] > Device: 8,1	Inode: 573939      Links: 1
	I1009 19:04:57.541662  161014 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 19:04:57.541679  161014 command_runner.go:130] > Access: 2025-10-09 19:00:49.271404553 +0000
	I1009 19:04:57.541690  161014 command_runner.go:130] > Modify: 2025-10-09 18:56:44.405307509 +0000
	I1009 19:04:57.541704  161014 command_runner.go:130] > Change: 2025-10-09 18:56:44.405307509 +0000
	I1009 19:04:57.541714  161014 command_runner.go:130] >  Birth: 2025-10-09 18:56:44.405307509 +0000
	I1009 19:04:57.541773  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:04:57.576034  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.576418  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:04:57.610746  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.611106  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:04:57.645558  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.645650  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:04:57.680926  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.681269  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:04:57.716681  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.716965  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 19:04:57.752444  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.752733  161014 kubeadm.go:400] StartCluster: {Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:04:57.752827  161014 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:04:57.752877  161014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:04:57.781930  161014 cri.go:89] found id: ""
	I1009 19:04:57.782002  161014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:04:57.790396  161014 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1009 19:04:57.790421  161014 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1009 19:04:57.790427  161014 command_runner.go:130] > /var/lib/minikube/etcd:
	I1009 19:04:57.790446  161014 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:04:57.790453  161014 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:04:57.790499  161014 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:04:57.798150  161014 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:04:57.798252  161014 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-158523" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:57.798307  161014 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-137890/kubeconfig needs updating (will repair): [kubeconfig missing "functional-158523" cluster setting kubeconfig missing "functional-158523" context setting]
	I1009 19:04:57.798648  161014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:04:57.799428  161014 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:57.799625  161014 kapi.go:59] client config for functional-158523: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:04:57.800169  161014 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:04:57.800185  161014 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:04:57.800191  161014 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:04:57.800195  161014 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:04:57.800199  161014 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:04:57.800257  161014 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:04:57.800663  161014 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:04:57.808677  161014 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:04:57.808712  161014 kubeadm.go:601] duration metric: took 18.25382ms to restartPrimaryControlPlane
	I1009 19:04:57.808720  161014 kubeadm.go:402] duration metric: took 56.001565ms to StartCluster
	I1009 19:04:57.808736  161014 settings.go:142] acquiring lock: {Name:mk34466033fb866bc8e167d2d953624ad0802283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:04:57.808837  161014 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:57.809418  161014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:04:57.809652  161014 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:04:57.809720  161014 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:04:57.809869  161014 addons.go:69] Setting storage-provisioner=true in profile "functional-158523"
	I1009 19:04:57.809882  161014 addons.go:69] Setting default-storageclass=true in profile "functional-158523"
	I1009 19:04:57.809890  161014 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:57.809907  161014 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-158523"
	I1009 19:04:57.809888  161014 addons.go:238] Setting addon storage-provisioner=true in "functional-158523"
	I1009 19:04:57.809999  161014 host.go:66] Checking if "functional-158523" exists ...
	I1009 19:04:57.810265  161014 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:04:57.810325  161014 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:04:57.815899  161014 out.go:179] * Verifying Kubernetes components...
	I1009 19:04:57.817259  161014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:04:57.830319  161014 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:57.830565  161014 kapi.go:59] client config for functional-158523: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:04:57.830893  161014 addons.go:238] Setting addon default-storageclass=true in "functional-158523"
	I1009 19:04:57.830936  161014 host.go:66] Checking if "functional-158523" exists ...
	I1009 19:04:57.831444  161014 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:04:57.831697  161014 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:04:57.833512  161014 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:57.833530  161014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:04:57.833580  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:57.856284  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:57.858504  161014 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:57.858545  161014 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:04:57.858618  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:57.879618  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:57.916522  161014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:04:57.930660  161014 node_ready.go:35] waiting up to 6m0s for node "functional-158523" to be "Ready" ...
	I1009 19:04:57.930861  161014 type.go:168] "Request Body" body=""
	I1009 19:04:57.930944  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:57.931232  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:04:57.969596  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:57.988544  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:58.026986  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.027037  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.027061  161014 retry.go:31] will retry after 164.488016ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.047051  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.047098  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.047116  161014 retry.go:31] will retry after 194.483244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.192480  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:58.242329  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:58.247629  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.247684  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.247711  161014 retry.go:31] will retry after 217.861079ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.297775  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.297841  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.297866  161014 retry.go:31] will retry after 198.924996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
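Each kubectl apply above fails because the apiserver on localhost:8441 is not accepting connections yet ("connection refused"), so retry.go reschedules the apply with a growing delay (164ms, 194ms, 218ms, 580ms, ...). A minimal sketch of that retry-with-increasing-backoff pattern (illustrative only, not minikube's retry package; the callback and error text are placeholders):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// sleeping a jittered, roughly doubling delay between tries, mirroring
// the increasing "will retry after ..." intervals in the log above.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return fmt.Errorf("still failing after %d attempts: %w", attempts, err)
}

func main() {
	_ = retryWithBackoff(5, 150*time.Millisecond, func() error {
		// Stand-in for the kubectl apply that keeps failing in the log.
		return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
	})
}
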
	I1009 19:04:58.431075  161014 type.go:168] "Request Body" body=""
	I1009 19:04:58.431155  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:58.431537  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:04:58.466794  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:58.497509  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:58.521187  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.524476  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.524506  161014 retry.go:31] will retry after 579.961825ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.549062  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.552103  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.552134  161014 retry.go:31] will retry after 574.521259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.930944  161014 type.go:168] "Request Body" body=""
	I1009 19:04:58.931032  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:58.931452  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:04:59.104703  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:59.127368  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:59.161080  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:59.161136  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.161156  161014 retry.go:31] will retry after 734.839127ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.184025  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:59.184076  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.184098  161014 retry.go:31] will retry after 1.025268007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.431572  161014 type.go:168] "Request Body" body=""
	I1009 19:04:59.431684  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:59.432074  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:04:59.896539  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:59.931433  161014 type.go:168] "Request Body" body=""
	I1009 19:04:59.931506  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:59.931837  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:04:59.931910  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
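The node_ready waiter keeps issuing the GET shown above every ~500ms for up to 6m, logging connection-refused errors and retrying rather than aborting. A minimal sketch of such a wait loop as a client-go helper (hypothetical waitNodeReady function, not minikube's node_ready.go; it expects an already-built clientset like the one sketched earlier):

package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node object until its Ready condition is True
// or the timeout expires, tolerating transient errors (e.g. connection
// refused while the apiserver restarts), as the log above does.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				fmt.Println("will retry:", err) // keep polling on transient errors
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}
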
	I1009 19:04:59.949186  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:59.952452  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.952481  161014 retry.go:31] will retry after 1.084602838s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:00.209882  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:00.262148  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:00.265292  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:00.265336  161014 retry.go:31] will retry after 1.287073207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:00.431721  161014 type.go:168] "Request Body" body=""
	I1009 19:05:00.431804  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:00.432145  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:00.931797  161014 type.go:168] "Request Body" body=""
	I1009 19:05:00.931880  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:00.932240  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:01.037525  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:01.094236  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:01.094283  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:01.094304  161014 retry.go:31] will retry after 1.546934371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:01.431777  161014 type.go:168] "Request Body" body=""
	I1009 19:05:01.431854  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:01.432251  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:01.553547  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:01.609996  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:01.610065  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:01.610089  161014 retry.go:31] will retry after 1.923829662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:01.931551  161014 type.go:168] "Request Body" body=""
	I1009 19:05:01.931629  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:01.931969  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:01.932040  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:02.431907  161014 type.go:168] "Request Body" body=""
	I1009 19:05:02.431987  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:02.432358  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:02.641614  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:02.696762  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:02.699844  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:02.699873  161014 retry.go:31] will retry after 2.36633365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:02.931226  161014 type.go:168] "Request Body" body=""
	I1009 19:05:02.931306  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:02.931737  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:03.431598  161014 type.go:168] "Request Body" body=""
	I1009 19:05:03.431682  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:03.432054  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:03.534329  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:03.590565  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:03.590611  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:03.590631  161014 retry.go:31] will retry after 1.952860092s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:03.931329  161014 type.go:168] "Request Body" body=""
	I1009 19:05:03.931427  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:03.931807  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:04.431531  161014 type.go:168] "Request Body" body=""
	I1009 19:05:04.431620  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:04.432004  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:04.432087  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:04.931914  161014 type.go:168] "Request Body" body=""
	I1009 19:05:04.931993  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:04.932341  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:05.066624  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:05.119719  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:05.123044  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:05.123086  161014 retry.go:31] will retry after 6.108852521s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:05.431602  161014 type.go:168] "Request Body" body=""
	I1009 19:05:05.431719  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:05.432158  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:05.544481  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:05.597312  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:05.600803  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:05.600837  161014 retry.go:31] will retry after 3.364758217s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:05.931296  161014 type.go:168] "Request Body" body=""
	I1009 19:05:05.931418  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:05.931808  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:06.431397  161014 type.go:168] "Request Body" body=""
	I1009 19:05:06.431479  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:06.431873  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:06.931533  161014 type.go:168] "Request Body" body=""
	I1009 19:05:06.931626  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:06.932024  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:06.932104  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:07.431687  161014 type.go:168] "Request Body" body=""
	I1009 19:05:07.431779  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:07.432140  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:07.930948  161014 type.go:168] "Request Body" body=""
	I1009 19:05:07.931031  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:07.931436  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:08.431020  161014 type.go:168] "Request Body" body=""
	I1009 19:05:08.431105  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:08.431489  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:08.931423  161014 type.go:168] "Request Body" body=""
	I1009 19:05:08.931528  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:08.931995  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:08.966195  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:09.019582  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:09.022605  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:09.022645  161014 retry.go:31] will retry after 7.771885559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:09.431187  161014 type.go:168] "Request Body" body=""
	I1009 19:05:09.431265  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:09.431662  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:09.431745  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:09.931552  161014 type.go:168] "Request Body" body=""
	I1009 19:05:09.931635  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:09.931979  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:10.431855  161014 type.go:168] "Request Body" body=""
	I1009 19:05:10.431945  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:10.432274  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:10.931110  161014 type.go:168] "Request Body" body=""
	I1009 19:05:10.931203  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:10.931572  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:11.233030  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:11.288902  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:11.288953  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:11.288975  161014 retry.go:31] will retry after 3.345246752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:11.431308  161014 type.go:168] "Request Body" body=""
	I1009 19:05:11.431402  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:11.431749  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:11.431819  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:11.931642  161014 type.go:168] "Request Body" body=""
	I1009 19:05:11.931749  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:11.932113  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:12.430947  161014 type.go:168] "Request Body" body=""
	I1009 19:05:12.431034  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:12.431445  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:12.931228  161014 type.go:168] "Request Body" body=""
	I1009 19:05:12.931315  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:12.931711  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:13.431639  161014 type.go:168] "Request Body" body=""
	I1009 19:05:13.431724  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:13.432088  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:13.432151  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:13.930962  161014 type.go:168] "Request Body" body=""
	I1009 19:05:13.931048  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:13.931440  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:14.431210  161014 type.go:168] "Request Body" body=""
	I1009 19:05:14.431296  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:14.431700  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:14.635101  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:14.689463  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:14.692943  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:14.692988  161014 retry.go:31] will retry after 8.426490786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:14.931454  161014 type.go:168] "Request Body" body=""
	I1009 19:05:14.931531  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:14.931912  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:15.431649  161014 type.go:168] "Request Body" body=""
	I1009 19:05:15.431767  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:15.432139  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:15.432244  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:15.931808  161014 type.go:168] "Request Body" body=""
	I1009 19:05:15.931885  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:15.932226  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:16.430935  161014 type.go:168] "Request Body" body=""
	I1009 19:05:16.431026  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:16.431417  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:16.794854  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:16.849041  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:16.852200  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:16.852234  161014 retry.go:31] will retry after 11.902123756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:16.931535  161014 type.go:168] "Request Body" body=""
	I1009 19:05:16.931634  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:16.931986  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:17.431870  161014 type.go:168] "Request Body" body=""
	I1009 19:05:17.431977  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:17.432410  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:17.432479  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:17.931194  161014 type.go:168] "Request Body" body=""
	I1009 19:05:17.931301  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:17.931659  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:18.431314  161014 type.go:168] "Request Body" body=""
	I1009 19:05:18.431420  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:18.431851  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:18.931802  161014 type.go:168] "Request Body" body=""
	I1009 19:05:18.931891  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:18.932247  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:19.431889  161014 type.go:168] "Request Body" body=""
	I1009 19:05:19.431978  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:19.432365  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:19.930982  161014 type.go:168] "Request Body" body=""
	I1009 19:05:19.931070  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:19.931472  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:19.931543  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:20.431080  161014 type.go:168] "Request Body" body=""
	I1009 19:05:20.431159  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:20.431505  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:20.931084  161014 type.go:168] "Request Body" body=""
	I1009 19:05:20.931162  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:20.931465  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:21.431126  161014 type.go:168] "Request Body" body=""
	I1009 19:05:21.431210  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:21.431583  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:21.931228  161014 type.go:168] "Request Body" body=""
	I1009 19:05:21.931310  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:21.931673  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:21.931757  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:22.431250  161014 type.go:168] "Request Body" body=""
	I1009 19:05:22.431335  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:22.431706  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:22.931281  161014 type.go:168] "Request Body" body=""
	I1009 19:05:22.931373  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:22.931764  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:23.120080  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:23.178288  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:23.178344  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:23.178369  161014 retry.go:31] will retry after 12.554942652s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:23.431791  161014 type.go:168] "Request Body" body=""
	I1009 19:05:23.431875  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:23.432227  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:23.931667  161014 type.go:168] "Request Body" body=""
	I1009 19:05:23.931758  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:23.932103  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:23.932167  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:24.431140  161014 type.go:168] "Request Body" body=""
	I1009 19:05:24.431223  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:24.431607  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:24.931219  161014 type.go:168] "Request Body" body=""
	I1009 19:05:24.931297  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:24.931656  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:25.431282  161014 type.go:168] "Request Body" body=""
	I1009 19:05:25.431369  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:25.431754  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:25.931371  161014 type.go:168] "Request Body" body=""
	I1009 19:05:25.931475  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:25.931852  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:26.431721  161014 type.go:168] "Request Body" body=""
	I1009 19:05:26.431805  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:26.432173  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:26.432243  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:26.931895  161014 type.go:168] "Request Body" body=""
	I1009 19:05:26.931982  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:26.932327  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:27.430978  161014 type.go:168] "Request Body" body=""
	I1009 19:05:27.431069  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:27.431440  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:27.931122  161014 type.go:168] "Request Body" body=""
	I1009 19:05:27.931203  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:27.931568  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:28.431156  161014 type.go:168] "Request Body" body=""
	I1009 19:05:28.431240  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:28.431629  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:28.755128  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:28.809181  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:28.812331  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:28.812369  161014 retry.go:31] will retry after 17.899546939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
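The "retry.go:31] will retry after ..." lines show the addon applier backing off with growing, jittered intervals (11.9s, 12.6s, 17.9s, ...). The sketch below illustrates that generic backoff-with-jitter pattern; it is an assumption about the shape of the loop, not an excerpt of minikube's retry package, and the attempt count, base delay, and apply callback are placeholders.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn until it succeeds or attempts are exhausted,
// sleeping an exponentially growing, jittered interval between tries.
// Illustrative only; minikube's actual retry logic may differ.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		} else if i == attempts-1 {
			return fmt.Errorf("all %d attempts failed: %w", attempts, err)
		}
		// Add up to 50% jitter so concurrent retries do not synchronize.
		jitter := time.Duration(rand.Int63n(int64(delay/2) + 1))
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return errors.New("unreachable")
}

func main() {
	err := retryWithBackoff(5, 10*time.Second, func() error {
		// Placeholder for the failing "kubectl apply --force -f .../storageclass.yaml".
		return errors.New("connection refused")
	})
	fmt.Println(err)
}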
	I1009 19:05:28.931943  161014 type.go:168] "Request Body" body=""
	I1009 19:05:28.932042  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:28.932423  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:28.932495  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:29.431031  161014 type.go:168] "Request Body" body=""
	I1009 19:05:29.431117  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:29.431488  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:29.931112  161014 type.go:168] "Request Body" body=""
	I1009 19:05:29.931191  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:29.931549  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:30.431108  161014 type.go:168] "Request Body" body=""
	I1009 19:05:30.431184  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:30.431580  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:30.931234  161014 type.go:168] "Request Body" body=""
	I1009 19:05:30.931310  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:30.931701  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:31.431358  161014 type.go:168] "Request Body" body=""
	I1009 19:05:31.431464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:31.431883  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:31.431968  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:31.931551  161014 type.go:168] "Request Body" body=""
	I1009 19:05:31.931654  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:31.932150  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:32.431843  161014 type.go:168] "Request Body" body=""
	I1009 19:05:32.431928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:32.432325  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:32.930923  161014 type.go:168] "Request Body" body=""
	I1009 19:05:32.931009  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:32.931419  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:33.431049  161014 type.go:168] "Request Body" body=""
	I1009 19:05:33.431139  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:33.431539  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:33.931442  161014 type.go:168] "Request Body" body=""
	I1009 19:05:33.931529  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:33.931921  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:33.931994  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:34.431615  161014 type.go:168] "Request Body" body=""
	I1009 19:05:34.431709  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:34.432084  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:34.931772  161014 type.go:168] "Request Body" body=""
	I1009 19:05:34.931853  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:34.932239  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:35.431990  161014 type.go:168] "Request Body" body=""
	I1009 19:05:35.432083  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:35.432473  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:35.733912  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:35.787306  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:35.790843  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:35.790879  161014 retry.go:31] will retry after 31.721699669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:35.931334  161014 type.go:168] "Request Body" body=""
	I1009 19:05:35.931474  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:35.931860  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:36.431788  161014 type.go:168] "Request Body" body=""
	I1009 19:05:36.431878  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:36.432242  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:36.432309  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:36.931065  161014 type.go:168] "Request Body" body=""
	I1009 19:05:36.931156  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:36.931540  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:37.431314  161014 type.go:168] "Request Body" body=""
	I1009 19:05:37.431439  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:37.431797  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:37.931603  161014 type.go:168] "Request Body" body=""
	I1009 19:05:37.931697  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:37.932081  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:38.431682  161014 type.go:168] "Request Body" body=""
	I1009 19:05:38.431775  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:38.432127  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:38.930953  161014 type.go:168] "Request Body" body=""
	I1009 19:05:38.931049  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:38.931414  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:38.931498  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:39.430956  161014 type.go:168] "Request Body" body=""
	I1009 19:05:39.431070  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:39.431453  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:39.931034  161014 type.go:168] "Request Body" body=""
	I1009 19:05:39.931145  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:39.931490  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:40.431075  161014 type.go:168] "Request Body" body=""
	I1009 19:05:40.431166  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:40.431582  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:40.931249  161014 type.go:168] "Request Body" body=""
	I1009 19:05:40.931336  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:40.931693  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:40.931756  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:41.431331  161014 type.go:168] "Request Body" body=""
	I1009 19:05:41.431437  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:41.431805  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:41.931445  161014 type.go:168] "Request Body" body=""
	I1009 19:05:41.931535  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:41.931928  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:42.431625  161014 type.go:168] "Request Body" body=""
	I1009 19:05:42.431705  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:42.432064  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:42.931717  161014 type.go:168] "Request Body" body=""
	I1009 19:05:42.931803  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:42.932175  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:42.932247  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:43.430857  161014 type.go:168] "Request Body" body=""
	I1009 19:05:43.430971  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:43.431317  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:43.931149  161014 type.go:168] "Request Body" body=""
	I1009 19:05:43.931232  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:43.931588  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:44.431181  161014 type.go:168] "Request Body" body=""
	I1009 19:05:44.431266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:44.431639  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:44.931222  161014 type.go:168] "Request Body" body=""
	I1009 19:05:44.931306  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:44.931692  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:45.431277  161014 type.go:168] "Request Body" body=""
	I1009 19:05:45.431360  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:45.431736  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:45.431802  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:45.931357  161014 type.go:168] "Request Body" body=""
	I1009 19:05:45.931462  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:45.931838  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:46.431506  161014 type.go:168] "Request Body" body=""
	I1009 19:05:46.431593  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:46.431956  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:46.712449  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:46.768626  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:46.768679  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:46.768704  161014 retry.go:31] will retry after 25.41172348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:46.930938  161014 type.go:168] "Request Body" body=""
	I1009 19:05:46.931055  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:46.931460  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:47.431076  161014 type.go:168] "Request Body" body=""
	I1009 19:05:47.431153  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:47.431556  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:47.931415  161014 type.go:168] "Request Body" body=""
	I1009 19:05:47.931510  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:47.931879  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:47.931959  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:48.431674  161014 type.go:168] "Request Body" body=""
	I1009 19:05:48.431759  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:48.432094  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:48.930896  161014 type.go:168] "Request Body" body=""
	I1009 19:05:48.931001  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:48.931373  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:49.430996  161014 type.go:168] "Request Body" body=""
	I1009 19:05:49.431076  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:49.431465  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:49.931262  161014 type.go:168] "Request Body" body=""
	I1009 19:05:49.931370  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:49.931789  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:50.431699  161014 type.go:168] "Request Body" body=""
	I1009 19:05:50.431782  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:50.432126  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:50.432204  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:50.930957  161014 type.go:168] "Request Body" body=""
	I1009 19:05:50.931084  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:50.931482  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:51.431250  161014 type.go:168] "Request Body" body=""
	I1009 19:05:51.431347  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:51.431706  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:51.931601  161014 type.go:168] "Request Body" body=""
	I1009 19:05:51.931698  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:51.932063  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:52.430862  161014 type.go:168] "Request Body" body=""
	I1009 19:05:52.430953  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:52.431298  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:52.931085  161014 type.go:168] "Request Body" body=""
	I1009 19:05:52.931179  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:52.931557  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:52.931624  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:53.431339  161014 type.go:168] "Request Body" body=""
	I1009 19:05:53.431459  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:53.431829  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:53.931637  161014 type.go:168] "Request Body" body=""
	I1009 19:05:53.931736  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:53.932120  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:54.430920  161014 type.go:168] "Request Body" body=""
	I1009 19:05:54.431014  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:54.431426  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:54.931205  161014 type.go:168] "Request Body" body=""
	I1009 19:05:54.931299  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:54.931695  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:54.931776  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:55.431596  161014 type.go:168] "Request Body" body=""
	I1009 19:05:55.431674  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:55.432023  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:55.931858  161014 type.go:168] "Request Body" body=""
	I1009 19:05:55.931949  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:55.932317  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:56.431017  161014 type.go:168] "Request Body" body=""
	I1009 19:05:56.431102  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:56.431477  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:56.931242  161014 type.go:168] "Request Body" body=""
	I1009 19:05:56.931328  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:56.931740  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:56.931822  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:57.431701  161014 type.go:168] "Request Body" body=""
	I1009 19:05:57.431787  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:57.432169  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:57.931004  161014 type.go:168] "Request Body" body=""
	I1009 19:05:57.931088  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:57.931492  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:58.430896  161014 type.go:168] "Request Body" body=""
	I1009 19:05:58.430977  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:58.431316  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:58.931136  161014 type.go:168] "Request Body" body=""
	I1009 19:05:58.931305  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:58.931701  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:59.431527  161014 type.go:168] "Request Body" body=""
	I1009 19:05:59.431619  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:59.431986  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:59.432056  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:59.931914  161014 type.go:168] "Request Body" body=""
	I1009 19:05:59.932022  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:59.932451  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:00.431235  161014 type.go:168] "Request Body" body=""
	I1009 19:06:00.431321  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:00.431700  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:00.931491  161014 type.go:168] "Request Body" body=""
	I1009 19:06:00.931598  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:00.932038  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:01.430870  161014 type.go:168] "Request Body" body=""
	I1009 19:06:01.430962  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:01.431351  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:01.931160  161014 type.go:168] "Request Body" body=""
	I1009 19:06:01.931259  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:01.931701  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:01.931781  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:02.431642  161014 type.go:168] "Request Body" body=""
	I1009 19:06:02.431741  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:02.432105  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:02.930912  161014 type.go:168] "Request Body" body=""
	I1009 19:06:02.931026  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:02.931440  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:03.431233  161014 type.go:168] "Request Body" body=""
	I1009 19:06:03.431316  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:03.431698  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:03.931548  161014 type.go:168] "Request Body" body=""
	I1009 19:06:03.931627  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:03.932000  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:03.932085  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:04.431884  161014 type.go:168] "Request Body" body=""
	I1009 19:06:04.431963  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:04.432329  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:04.931167  161014 type.go:168] "Request Body" body=""
	I1009 19:06:04.931256  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:04.931675  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:05.431519  161014 type.go:168] "Request Body" body=""
	I1009 19:06:05.431593  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:05.431983  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:05.931927  161014 type.go:168] "Request Body" body=""
	I1009 19:06:05.932019  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:05.932421  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:05.932517  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:06.431278  161014 type.go:168] "Request Body" body=""
	I1009 19:06:06.431359  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:06.431798  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:06.931667  161014 type.go:168] "Request Body" body=""
	I1009 19:06:06.931753  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:06.932149  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:07.430942  161014 type.go:168] "Request Body" body=""
	I1009 19:06:07.431028  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:07.431419  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:07.513672  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:06:07.571073  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:07.571125  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:06:07.571145  161014 retry.go:31] will retry after 23.39838606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
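
The two entries above show the pattern that repeats for every addon manifest in this run: kubectl cannot download the OpenAPI schema from the (refused) apiserver, the apply exits with status 1, and minikube schedules another attempt after a randomized delay (the retry.go:31 "will retry after 23.39838606s" line). A minimal Go sketch of that retry-until-deadline behaviour, using only the standard library; the delays, deadline, and the fake failing function are illustrative assumptions, not minikube's actual retry.go:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn until it succeeds or the overall deadline
// passes, sleeping a jittered delay between attempts. This mirrors the
// "will retry after ...s" entries in the log above.
func retryUntil(deadline, base time.Duration, fn func() error) error {
	stop := time.Now().Add(deadline)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("giving up: %w", err)
		}
		// Randomize the wait so repeated callers do not retry in lockstep.
		wait := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
	}
}

func main() {
	attempts := 0
	err := retryUntil(5*time.Second, 500*time.Millisecond, func() error {
		attempts++
		if attempts < 3 {
			return fmt.Errorf("connect: connection refused (attempt %d)", attempts)
		}
		return nil
	})
	fmt.Println("result:", err)
}

In the failing run the wrapped command never succeeds, so each addon eventually surfaces the "Enabling '...' returned an error" warning seen further down.
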
	I1009 19:06:07.931687  161014 type.go:168] "Request Body" body=""
	I1009 19:06:07.931784  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:07.932135  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:08.430924  161014 type.go:168] "Request Body" body=""
	I1009 19:06:08.431034  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:08.431403  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:08.431469  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:08.931208  161014 type.go:168] "Request Body" body=""
	I1009 19:06:08.931288  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:08.931643  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:09.431520  161014 type.go:168] "Request Body" body=""
	I1009 19:06:09.431629  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:09.432018  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:09.931868  161014 type.go:168] "Request Body" body=""
	I1009 19:06:09.931945  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:09.932304  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:10.431151  161014 type.go:168] "Request Body" body=""
	I1009 19:06:10.431248  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:10.431669  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:10.431738  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:10.931500  161014 type.go:168] "Request Body" body=""
	I1009 19:06:10.931584  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:10.931948  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:11.431952  161014 type.go:168] "Request Body" body=""
	I1009 19:06:11.432052  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:11.432455  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:11.931228  161014 type.go:168] "Request Body" body=""
	I1009 19:06:11.931310  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:11.931689  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:12.181131  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:06:12.238294  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:12.238358  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:06:12.238405  161014 retry.go:31] will retry after 21.481583015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:06:12.431681  161014 type.go:168] "Request Body" body=""
	I1009 19:06:12.431761  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:12.432057  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:12.432128  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:12.931845  161014 type.go:168] "Request Body" body=""
	I1009 19:06:12.931939  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:12.932415  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:13.431004  161014 type.go:168] "Request Body" body=""
	I1009 19:06:13.431117  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:13.431483  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:13.931308  161014 type.go:168] "Request Body" body=""
	I1009 19:06:13.931404  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:13.931807  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:14.431415  161014 type.go:168] "Request Body" body=""
	I1009 19:06:14.431502  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:14.431906  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:14.931635  161014 type.go:168] "Request Body" body=""
	I1009 19:06:14.931725  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:14.932138  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:14.932205  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:15.431840  161014 type.go:168] "Request Body" body=""
	I1009 19:06:15.431927  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:15.432292  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:15.930896  161014 type.go:168] "Request Body" body=""
	I1009 19:06:15.930996  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:15.931404  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:16.431000  161014 type.go:168] "Request Body" body=""
	I1009 19:06:16.431088  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:16.431482  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:16.931089  161014 type.go:168] "Request Body" body=""
	I1009 19:06:16.931169  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:16.931606  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:17.431187  161014 type.go:168] "Request Body" body=""
	I1009 19:06:17.431266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:17.431645  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:17.431717  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:17.931505  161014 type.go:168] "Request Body" body=""
	I1009 19:06:17.931588  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:17.931977  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:18.431663  161014 type.go:168] "Request Body" body=""
	I1009 19:06:18.431753  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:18.432133  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:18.931039  161014 type.go:168] "Request Body" body=""
	I1009 19:06:18.931125  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:18.931496  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:19.431026  161014 type.go:168] "Request Body" body=""
	I1009 19:06:19.431101  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:19.431425  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:19.931079  161014 type.go:168] "Request Body" body=""
	I1009 19:06:19.931160  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:19.931533  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:19.931605  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:20.431142  161014 type.go:168] "Request Body" body=""
	I1009 19:06:20.431225  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:20.431606  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:20.931205  161014 type.go:168] "Request Body" body=""
	I1009 19:06:20.931288  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:20.931685  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:21.431270  161014 type.go:168] "Request Body" body=""
	I1009 19:06:21.431352  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:21.431727  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:21.931351  161014 type.go:168] "Request Body" body=""
	I1009 19:06:21.931459  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:21.931867  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:21.931960  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:22.431630  161014 type.go:168] "Request Body" body=""
	I1009 19:06:22.431720  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:22.432112  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:22.931909  161014 type.go:168] "Request Body" body=""
	I1009 19:06:22.932006  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:22.932466  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:23.431019  161014 type.go:168] "Request Body" body=""
	I1009 19:06:23.431108  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:23.431500  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:23.931352  161014 type.go:168] "Request Body" body=""
	I1009 19:06:23.931464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:23.931866  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:24.430863  161014 type.go:168] "Request Body" body=""
	I1009 19:06:24.430951  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:24.431355  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:24.431478  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:24.930971  161014 type.go:168] "Request Body" body=""
	I1009 19:06:24.931061  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:24.931476  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:25.431052  161014 type.go:168] "Request Body" body=""
	I1009 19:06:25.431143  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:25.431497  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:25.931072  161014 type.go:168] "Request Body" body=""
	I1009 19:06:25.931164  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:25.931594  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:26.430916  161014 type.go:168] "Request Body" body=""
	I1009 19:06:26.431010  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:26.431407  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:26.931057  161014 type.go:168] "Request Body" body=""
	I1009 19:06:26.931135  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:26.931533  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:26.931610  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:27.431142  161014 type.go:168] "Request Body" body=""
	I1009 19:06:27.431220  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:27.431622  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:27.931665  161014 type.go:168] "Request Body" body=""
	I1009 19:06:27.931758  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:27.932163  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:28.431861  161014 type.go:168] "Request Body" body=""
	I1009 19:06:28.431949  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:28.432310  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:28.931285  161014 type.go:168] "Request Body" body=""
	I1009 19:06:28.931362  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:28.931821  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:28.931892  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:29.431462  161014 type.go:168] "Request Body" body=""
	I1009 19:06:29.431547  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:29.432004  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:29.931701  161014 type.go:168] "Request Body" body=""
	I1009 19:06:29.931782  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:29.932180  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:30.431935  161014 type.go:168] "Request Body" body=""
	I1009 19:06:30.432026  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:30.432405  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:30.931026  161014 type.go:168] "Request Body" body=""
	I1009 19:06:30.931109  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:30.931522  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:30.970755  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:06:31.028107  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:31.028174  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:31.028309  161014 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:06:31.431764  161014 type.go:168] "Request Body" body=""
	I1009 19:06:31.431853  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:31.432208  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:31.432284  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:31.930867  161014 type.go:168] "Request Body" body=""
	I1009 19:06:31.930984  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:31.931336  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:32.430958  161014 type.go:168] "Request Body" body=""
	I1009 19:06:32.431047  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:32.431465  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:32.931031  161014 type.go:168] "Request Body" body=""
	I1009 19:06:32.931127  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:32.931496  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:33.431116  161014 type.go:168] "Request Body" body=""
	I1009 19:06:33.431195  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:33.431601  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:33.721082  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:06:33.781514  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:33.781597  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:33.781723  161014 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:06:33.784570  161014 out.go:179] * Enabled addons: 
	I1009 19:06:33.786444  161014 addons.go:514] duration metric: took 1m35.976729521s for enable addons: enabled=[]
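
At this point the addon callbacks have given up (enabled=[]), while the surrounding GET https://192.168.49.2:8441/api/v1/nodes/functional-158523 entries are node_ready.go polling roughly every 500ms and hitting "connection refused". A minimal sketch of that kind of readiness check with client-go; the kubeconfig path, node name, interval, and timeout are assumptions for illustration (and it requires the k8s.io/client-go module), it is not minikube's actual node_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the apiserver until the named node reports the
// Ready condition as True, retrying on transient errors such as the
// "dial tcp ...:8441: connect: connection refused" seen in the log.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, interval time.Duration) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		} else {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
	}
}

func main() {
	// Assumed paths and names, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "functional-158523", 500*time.Millisecond); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}

With the apiserver on 192.168.49.2:8441 never accepting connections, a loop like this can only keep logging the retry warnings that fill the remainder of this log.
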
	I1009 19:06:33.931193  161014 type.go:168] "Request Body" body=""
	I1009 19:06:33.931298  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:33.931708  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:33.931785  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:34.431608  161014 type.go:168] "Request Body" body=""
	I1009 19:06:34.431688  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:34.432050  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:34.931894  161014 type.go:168] "Request Body" body=""
	I1009 19:06:34.931982  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:34.932369  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:35.431177  161014 type.go:168] "Request Body" body=""
	I1009 19:06:35.431261  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:35.431656  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:35.931508  161014 type.go:168] "Request Body" body=""
	I1009 19:06:35.931632  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:35.932017  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:35.932080  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:36.431933  161014 type.go:168] "Request Body" body=""
	I1009 19:06:36.432042  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:36.432446  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:36.931225  161014 type.go:168] "Request Body" body=""
	I1009 19:06:36.931328  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:36.931704  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:37.431625  161014 type.go:168] "Request Body" body=""
	I1009 19:06:37.431738  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:37.432141  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:37.930857  161014 type.go:168] "Request Body" body=""
	I1009 19:06:37.930995  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:37.931342  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:38.431133  161014 type.go:168] "Request Body" body=""
	I1009 19:06:38.431214  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:38.431597  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:38.431683  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:38.931462  161014 type.go:168] "Request Body" body=""
	I1009 19:06:38.931563  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:38.931971  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:39.431871  161014 type.go:168] "Request Body" body=""
	I1009 19:06:39.431963  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:39.432315  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:39.931128  161014 type.go:168] "Request Body" body=""
	I1009 19:06:39.931220  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:39.931618  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:40.431437  161014 type.go:168] "Request Body" body=""
	I1009 19:06:40.431514  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:40.431890  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:40.431961  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:40.931810  161014 type.go:168] "Request Body" body=""
	I1009 19:06:40.931912  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:40.932290  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:41.431100  161014 type.go:168] "Request Body" body=""
	I1009 19:06:41.431218  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:41.431599  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:41.931346  161014 type.go:168] "Request Body" body=""
	I1009 19:06:41.931468  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:41.931837  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:42.431757  161014 type.go:168] "Request Body" body=""
	I1009 19:06:42.431845  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:42.432237  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:42.432298  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:42.931019  161014 type.go:168] "Request Body" body=""
	I1009 19:06:42.931113  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:42.931521  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:43.431303  161014 type.go:168] "Request Body" body=""
	I1009 19:06:43.431415  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:43.431782  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:43.931780  161014 type.go:168] "Request Body" body=""
	I1009 19:06:43.931864  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:43.932272  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:44.431107  161014 type.go:168] "Request Body" body=""
	I1009 19:06:44.431212  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:44.431609  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:44.931522  161014 type.go:168] "Request Body" body=""
	I1009 19:06:44.931612  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:44.932005  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:44.932091  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:45.430863  161014 type.go:168] "Request Body" body=""
	I1009 19:06:45.430955  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:45.431333  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:45.931195  161014 type.go:168] "Request Body" body=""
	I1009 19:06:45.931296  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:45.931727  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:46.431598  161014 type.go:168] "Request Body" body=""
	I1009 19:06:46.431682  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:46.432089  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:46.930909  161014 type.go:168] "Request Body" body=""
	I1009 19:06:46.931014  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:46.931410  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:47.431166  161014 type.go:168] "Request Body" body=""
	I1009 19:06:47.431244  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:47.431610  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:47.431679  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:47.931409  161014 type.go:168] "Request Body" body=""
	I1009 19:06:47.931495  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:47.931852  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:48.431707  161014 type.go:168] "Request Body" body=""
	I1009 19:06:48.431803  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:48.432224  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:48.931113  161014 type.go:168] "Request Body" body=""
	I1009 19:06:48.931196  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:48.931590  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:49.431438  161014 type.go:168] "Request Body" body=""
	I1009 19:06:49.431532  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:49.431933  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:49.432014  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:49.931847  161014 type.go:168] "Request Body" body=""
	I1009 19:06:49.931955  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:49.932365  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:50.431151  161014 type.go:168] "Request Body" body=""
	I1009 19:06:50.431252  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:50.431731  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:50.931584  161014 type.go:168] "Request Body" body=""
	I1009 19:06:50.931668  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:50.932034  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:51.431892  161014 type.go:168] "Request Body" body=""
	I1009 19:06:51.432001  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:51.432357  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:51.432451  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:51.931169  161014 type.go:168] "Request Body" body=""
	I1009 19:06:51.931251  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:51.931649  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:52.431585  161014 type.go:168] "Request Body" body=""
	I1009 19:06:52.431683  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:52.432058  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:52.931903  161014 type.go:168] "Request Body" body=""
	I1009 19:06:52.931994  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:52.932365  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:53.431140  161014 type.go:168] "Request Body" body=""
	I1009 19:06:53.431240  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:53.431607  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:53.931515  161014 type.go:168] "Request Body" body=""
	I1009 19:06:53.931602  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:53.931970  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:53.932045  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:54.431874  161014 type.go:168] "Request Body" body=""
	I1009 19:06:54.431956  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:54.432333  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:54.931110  161014 type.go:168] "Request Body" body=""
	I1009 19:06:54.931191  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:54.931572  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:55.431313  161014 type.go:168] "Request Body" body=""
	I1009 19:06:55.431422  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:55.431759  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:55.931628  161014 type.go:168] "Request Body" body=""
	I1009 19:06:55.931708  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:55.932052  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:55.932122  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:56.430861  161014 type.go:168] "Request Body" body=""
	I1009 19:06:56.430953  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:56.431299  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:56.931073  161014 type.go:168] "Request Body" body=""
	I1009 19:06:56.931162  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:56.931537  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:57.431318  161014 type.go:168] "Request Body" body=""
	I1009 19:06:57.431417  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:57.431759  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:57.931618  161014 type.go:168] "Request Body" body=""
	I1009 19:06:57.931839  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:57.932218  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:57.932279  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:58.431144  161014 type.go:168] "Request Body" body=""
	I1009 19:06:58.431334  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:58.431970  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:58.931861  161014 type.go:168] "Request Body" body=""
	I1009 19:06:58.931940  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:58.932311  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:59.431143  161014 type.go:168] "Request Body" body=""
	I1009 19:06:59.431223  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:59.431592  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:59.930909  161014 type.go:168] "Request Body" body=""
	I1009 19:06:59.931020  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:59.931371  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:00.430999  161014 type.go:168] "Request Body" body=""
	I1009 19:07:00.431081  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:00.431500  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:00.431566  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:00.931093  161014 type.go:168] "Request Body" body=""
	I1009 19:07:00.931180  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:00.931581  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:01.431360  161014 type.go:168] "Request Body" body=""
	I1009 19:07:01.431464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:01.431832  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:01.931692  161014 type.go:168] "Request Body" body=""
	I1009 19:07:01.931784  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:01.932184  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:02.430934  161014 type.go:168] "Request Body" body=""
	I1009 19:07:02.431018  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:02.431378  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:02.931191  161014 type.go:168] "Request Body" body=""
	I1009 19:07:02.931275  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:02.931689  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:02.931756  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:03.431523  161014 type.go:168] "Request Body" body=""
	I1009 19:07:03.431604  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:03.431991  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:03.930871  161014 type.go:168] "Request Body" body=""
	I1009 19:07:03.930969  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:03.931407  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:04.431200  161014 type.go:168] "Request Body" body=""
	I1009 19:07:04.431281  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:04.431686  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:04.931603  161014 type.go:168] "Request Body" body=""
	I1009 19:07:04.931684  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:04.932085  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:04.932154  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:05.430888  161014 type.go:168] "Request Body" body=""
	I1009 19:07:05.430980  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:05.431365  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:05.931176  161014 type.go:168] "Request Body" body=""
	I1009 19:07:05.931266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:05.931718  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:06.431607  161014 type.go:168] "Request Body" body=""
	I1009 19:07:06.431688  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:06.432075  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:06.930900  161014 type.go:168] "Request Body" body=""
	I1009 19:07:06.931004  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:06.931420  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:07.431211  161014 type.go:168] "Request Body" body=""
	I1009 19:07:07.431297  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:07.431674  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:07.431738  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:07.931521  161014 type.go:168] "Request Body" body=""
	I1009 19:07:07.931600  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:07.931988  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:08.431938  161014 type.go:168] "Request Body" body=""
	I1009 19:07:08.432023  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:08.432368  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:08.931198  161014 type.go:168] "Request Body" body=""
	I1009 19:07:08.931276  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:08.931670  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:09.431634  161014 type.go:168] "Request Body" body=""
	I1009 19:07:09.431764  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:09.432193  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:09.432271  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:09.931021  161014 type.go:168] "Request Body" body=""
	I1009 19:07:09.931112  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:09.931511  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:10.431319  161014 type.go:168] "Request Body" body=""
	I1009 19:07:10.431421  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:10.431776  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:10.931586  161014 type.go:168] "Request Body" body=""
	I1009 19:07:10.931675  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:10.932028  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:11.431928  161014 type.go:168] "Request Body" body=""
	I1009 19:07:11.432018  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:11.432409  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:11.432493  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:11.931228  161014 type.go:168] "Request Body" body=""
	I1009 19:07:11.931314  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:11.931691  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:12.431493  161014 type.go:168] "Request Body" body=""
	I1009 19:07:12.431576  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:12.431970  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:12.931830  161014 type.go:168] "Request Body" body=""
	I1009 19:07:12.931910  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:12.932268  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:13.431040  161014 type.go:168] "Request Body" body=""
	I1009 19:07:13.431128  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:13.431515  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:13.931313  161014 type.go:168] "Request Body" body=""
	I1009 19:07:13.931411  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:13.931829  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:13.931895  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:14.431732  161014 type.go:168] "Request Body" body=""
	I1009 19:07:14.431830  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:14.432198  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:14.931016  161014 type.go:168] "Request Body" body=""
	I1009 19:07:14.931107  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:14.931472  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:15.431233  161014 type.go:168] "Request Body" body=""
	I1009 19:07:15.431326  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:15.431754  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:15.931605  161014 type.go:168] "Request Body" body=""
	I1009 19:07:15.931691  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:15.932042  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:15.932112  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:16.430847  161014 type.go:168] "Request Body" body=""
	I1009 19:07:16.430926  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:16.431288  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:16.931059  161014 type.go:168] "Request Body" body=""
	I1009 19:07:16.931135  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:16.931483  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:17.431236  161014 type.go:168] "Request Body" body=""
	I1009 19:07:17.431328  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:17.431725  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:17.931584  161014 type.go:168] "Request Body" body=""
	I1009 19:07:17.931680  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:17.932068  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:17.932144  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:18.430878  161014 type.go:168] "Request Body" body=""
	I1009 19:07:18.430959  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:18.431336  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:18.931220  161014 type.go:168] "Request Body" body=""
	I1009 19:07:18.931308  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:18.931716  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:19.431622  161014 type.go:168] "Request Body" body=""
	I1009 19:07:19.431711  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:19.432084  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:19.930887  161014 type.go:168] "Request Body" body=""
	I1009 19:07:19.930970  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:19.931335  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:20.431128  161014 type.go:168] "Request Body" body=""
	I1009 19:07:20.431228  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:20.431607  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:20.431677  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:20.931571  161014 type.go:168] "Request Body" body=""
	I1009 19:07:20.931652  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:20.932025  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:21.431914  161014 type.go:168] "Request Body" body=""
	I1009 19:07:21.432004  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:21.432437  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:21.931260  161014 type.go:168] "Request Body" body=""
	I1009 19:07:21.931349  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:21.931776  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:22.431637  161014 type.go:168] "Request Body" body=""
	I1009 19:07:22.431729  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:22.432091  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:22.432158  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:22.930926  161014 type.go:168] "Request Body" body=""
	I1009 19:07:22.931021  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:22.931412  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:23.431182  161014 type.go:168] "Request Body" body=""
	I1009 19:07:23.431266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:23.431631  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:23.931458  161014 type.go:168] "Request Body" body=""
	I1009 19:07:23.931550  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:23.931920  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:24.431853  161014 type.go:168] "Request Body" body=""
	I1009 19:07:24.431948  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:24.432326  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:24.432422  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:24.931143  161014 type.go:168] "Request Body" body=""
	I1009 19:07:24.931223  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:24.931599  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:25.431358  161014 type.go:168] "Request Body" body=""
	I1009 19:07:25.431464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:25.431821  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:25.931703  161014 type.go:168] "Request Body" body=""
	I1009 19:07:25.931787  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:25.932180  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:26.430976  161014 type.go:168] "Request Body" body=""
	I1009 19:07:26.431075  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:26.431458  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:26.931245  161014 type.go:168] "Request Body" body=""
	I1009 19:07:26.931331  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:26.931713  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:26.931784  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:27.431576  161014 type.go:168] "Request Body" body=""
	I1009 19:07:27.431668  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:27.432031  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:27.931772  161014 type.go:168] "Request Body" body=""
	I1009 19:07:27.931862  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:27.932254  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:28.431022  161014 type.go:168] "Request Body" body=""
	I1009 19:07:28.431102  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:28.431482  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:28.931348  161014 type.go:168] "Request Body" body=""
	I1009 19:07:28.931459  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:28.931844  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:28.931911  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:29.431781  161014 type.go:168] "Request Body" body=""
	I1009 19:07:29.431865  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:29.432226  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:29.931019  161014 type.go:168] "Request Body" body=""
	I1009 19:07:29.931099  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:29.931495  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:30.431235  161014 type.go:168] "Request Body" body=""
	I1009 19:07:30.431318  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:30.431699  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:30.931618  161014 type.go:168] "Request Body" body=""
	I1009 19:07:30.931726  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:30.932096  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:30.932155  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:31.430950  161014 type.go:168] "Request Body" body=""
	I1009 19:07:31.431039  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:31.431429  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:31.931226  161014 type.go:168] "Request Body" body=""
	I1009 19:07:31.931324  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:31.931743  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:32.431688  161014 type.go:168] "Request Body" body=""
	I1009 19:07:32.431781  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:32.432184  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:32.930987  161014 type.go:168] "Request Body" body=""
	I1009 19:07:32.931070  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:32.931473  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:33.431239  161014 type.go:168] "Request Body" body=""
	I1009 19:07:33.431321  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:33.431722  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:33.431792  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:33.931516  161014 type.go:168] "Request Body" body=""
	I1009 19:07:33.931606  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:33.931963  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:34.431848  161014 type.go:168] "Request Body" body=""
	I1009 19:07:34.431929  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:34.432337  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:34.931149  161014 type.go:168] "Request Body" body=""
	I1009 19:07:34.931233  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:34.931610  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:35.431433  161014 type.go:168] "Request Body" body=""
	I1009 19:07:35.431519  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:35.431884  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:35.431951  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:35.931757  161014 type.go:168] "Request Body" body=""
	I1009 19:07:35.931834  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:35.932194  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:36.431002  161014 type.go:168] "Request Body" body=""
	I1009 19:07:36.431092  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:36.431521  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:36.931304  161014 type.go:168] "Request Body" body=""
	I1009 19:07:36.931415  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:36.931771  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:37.431635  161014 type.go:168] "Request Body" body=""
	I1009 19:07:37.431735  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:37.432135  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:37.432203  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:37.931637  161014 type.go:168] "Request Body" body=""
	I1009 19:07:37.931755  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:37.932124  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:38.430922  161014 type.go:168] "Request Body" body=""
	I1009 19:07:38.431020  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:38.431405  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:38.931217  161014 type.go:168] "Request Body" body=""
	I1009 19:07:38.931295  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:38.931651  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:39.431495  161014 type.go:168] "Request Body" body=""
	I1009 19:07:39.431575  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:39.431936  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:39.931858  161014 type.go:168] "Request Body" body=""
	I1009 19:07:39.931940  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:39.932326  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:39.932421  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:40.431161  161014 type.go:168] "Request Body" body=""
	I1009 19:07:40.431255  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:40.431615  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:40.931366  161014 type.go:168] "Request Body" body=""
	I1009 19:07:40.931491  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:40.931869  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:41.431767  161014 type.go:168] "Request Body" body=""
	I1009 19:07:41.431861  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:41.432350  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:41.931160  161014 type.go:168] "Request Body" body=""
	I1009 19:07:41.931250  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:41.931735  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:42.431633  161014 type.go:168] "Request Body" body=""
	I1009 19:07:42.431732  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:42.432111  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:42.432176  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:42.930929  161014 type.go:168] "Request Body" body=""
	I1009 19:07:42.931031  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:42.931442  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:43.431234  161014 type.go:168] "Request Body" body=""
	I1009 19:07:43.431336  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:43.431722  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:43.931601  161014 type.go:168] "Request Body" body=""
	I1009 19:07:43.931683  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:43.932053  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:44.430865  161014 type.go:168] "Request Body" body=""
	I1009 19:07:44.430947  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:44.431356  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:44.931167  161014 type.go:168] "Request Body" body=""
	I1009 19:07:44.931250  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:44.931627  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:44.931696  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:45.431431  161014 type.go:168] "Request Body" body=""
	I1009 19:07:45.431510  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:45.431847  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:45.931770  161014 type.go:168] "Request Body" body=""
	I1009 19:07:45.931853  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:45.932210  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:46.430939  161014 type.go:168] "Request Body" body=""
	I1009 19:07:46.431018  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:46.431347  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:46.931133  161014 type.go:168] "Request Body" body=""
	I1009 19:07:46.931213  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:46.931599  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:47.431337  161014 type.go:168] "Request Body" body=""
	I1009 19:07:47.431449  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:47.431806  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:47.431876  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:47.931598  161014 type.go:168] "Request Body" body=""
	I1009 19:07:47.931682  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:47.932028  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:48.431835  161014 type.go:168] "Request Body" body=""
	I1009 19:07:48.431919  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:48.432273  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:48.931089  161014 type.go:168] "Request Body" body=""
	I1009 19:07:48.931179  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:48.931527  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:49.431272  161014 type.go:168] "Request Body" body=""
	I1009 19:07:49.431350  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:49.431715  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:49.931579  161014 type.go:168] "Request Body" body=""
	I1009 19:07:49.931664  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:49.932040  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:49.932107  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:50.431582  161014 type.go:168] "Request Body" body=""
	I1009 19:07:50.431662  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:50.432003  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:50.931872  161014 type.go:168] "Request Body" body=""
	I1009 19:07:50.931951  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:50.932322  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:51.431016  161014 type.go:168] "Request Body" body=""
	I1009 19:07:51.431095  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:51.431478  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:51.931270  161014 type.go:168] "Request Body" body=""
	I1009 19:07:51.931349  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:51.931734  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:52.431662  161014 type.go:168] "Request Body" body=""
	I1009 19:07:52.431743  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:52.432165  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:52.432255  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:52.931027  161014 type.go:168] "Request Body" body=""
	I1009 19:07:52.931111  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:52.931524  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:53.431299  161014 type.go:168] "Request Body" body=""
	I1009 19:07:53.431409  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:53.431777  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:53.931692  161014 type.go:168] "Request Body" body=""
	I1009 19:07:53.931802  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:53.932188  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:54.431033  161014 type.go:168] "Request Body" body=""
	I1009 19:07:54.431116  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:54.431506  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:54.931262  161014 type.go:168] "Request Body" body=""
	I1009 19:07:54.931371  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:54.931818  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:54.931896  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:55.431748  161014 type.go:168] "Request Body" body=""
	I1009 19:07:55.431839  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:55.432227  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:55.931001  161014 type.go:168] "Request Body" body=""
	I1009 19:07:55.931091  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:55.931464  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:56.431257  161014 type.go:168] "Request Body" body=""
	I1009 19:07:56.431342  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:56.431727  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:56.931602  161014 type.go:168] "Request Body" body=""
	I1009 19:07:56.931701  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:56.932081  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:56.932152  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:57.430910  161014 type.go:168] "Request Body" body=""
	I1009 19:07:57.431002  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:57.431362  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:57.931308  161014 type.go:168] "Request Body" body=""
	I1009 19:07:57.931413  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:57.931773  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:58.431643  161014 type.go:168] "Request Body" body=""
	I1009 19:07:58.431802  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:58.432134  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:58.931081  161014 type.go:168] "Request Body" body=""
	I1009 19:07:58.931169  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:58.931540  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:59.431310  161014 type.go:168] "Request Body" body=""
	I1009 19:07:59.431416  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:59.431835  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:59.431910  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:59.931738  161014 type.go:168] "Request Body" body=""
	I1009 19:07:59.931826  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:59.932198  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:00.430977  161014 type.go:168] "Request Body" body=""
	I1009 19:08:00.431073  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:00.431459  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:00.931240  161014 type.go:168] "Request Body" body=""
	I1009 19:08:00.931327  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:00.931726  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:01.431608  161014 type.go:168] "Request Body" body=""
	I1009 19:08:01.431703  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:01.432081  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:01.432155  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:01.930901  161014 type.go:168] "Request Body" body=""
	I1009 19:08:01.930998  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:01.931353  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:02.431155  161014 type.go:168] "Request Body" body=""
	I1009 19:08:02.431246  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:02.431683  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:02.931507  161014 type.go:168] "Request Body" body=""
	I1009 19:08:02.931648  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:02.932004  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:03.431604  161014 type.go:168] "Request Body" body=""
	I1009 19:08:03.431682  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:03.432043  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:03.930851  161014 type.go:168] "Request Body" body=""
	I1009 19:08:03.930932  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:03.931328  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:03.931434  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:04.431148  161014 type.go:168] "Request Body" body=""
	I1009 19:08:04.431226  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:04.431671  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:04.931497  161014 type.go:168] "Request Body" body=""
	I1009 19:08:04.931576  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:04.931933  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:05.431818  161014 type.go:168] "Request Body" body=""
	I1009 19:08:05.431913  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:05.432337  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:05.931111  161014 type.go:168] "Request Body" body=""
	I1009 19:08:05.931188  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:05.931598  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:05.931665  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:06.431433  161014 type.go:168] "Request Body" body=""
	I1009 19:08:06.431518  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:06.431897  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:06.931739  161014 type.go:168] "Request Body" body=""
	I1009 19:08:06.931825  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:06.932190  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:07.431010  161014 type.go:168] "Request Body" body=""
	I1009 19:08:07.431098  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:07.431492  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:07.931321  161014 type.go:168] "Request Body" body=""
	I1009 19:08:07.931478  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:07.931847  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:07.931911  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:08.431736  161014 type.go:168] "Request Body" body=""
	I1009 19:08:08.431826  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:08.432199  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:08.931147  161014 type.go:168] "Request Body" body=""
	I1009 19:08:08.931256  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:08.931581  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:09.431348  161014 type.go:168] "Request Body" body=""
	I1009 19:08:09.431501  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:09.431847  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:09.931761  161014 type.go:168] "Request Body" body=""
	I1009 19:08:09.931868  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:09.932264  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:09.932358  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:10.431111  161014 type.go:168] "Request Body" body=""
	I1009 19:08:10.431226  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:10.431600  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:10.931402  161014 type.go:168] "Request Body" body=""
	I1009 19:08:10.931502  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:10.931871  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:11.431784  161014 type.go:168] "Request Body" body=""
	I1009 19:08:11.431872  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:11.432233  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:11.931048  161014 type.go:168] "Request Body" body=""
	I1009 19:08:11.931144  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:11.931576  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:12.431421  161014 type.go:168] "Request Body" body=""
	I1009 19:08:12.431503  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:12.431862  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:12.431928  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:12.931757  161014 type.go:168] "Request Body" body=""
	I1009 19:08:12.931854  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:12.932305  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:13.431097  161014 type.go:168] "Request Body" body=""
	I1009 19:08:13.431185  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:13.431628  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:13.931448  161014 type.go:168] "Request Body" body=""
	I1009 19:08:13.931544  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:13.931895  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:14.431813  161014 type.go:168] "Request Body" body=""
	I1009 19:08:14.431896  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:14.432325  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:14.432452  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:14.931193  161014 type.go:168] "Request Body" body=""
	I1009 19:08:14.931304  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:14.931724  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:15.431610  161014 type.go:168] "Request Body" body=""
	I1009 19:08:15.431784  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:15.432189  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:15.930996  161014 type.go:168] "Request Body" body=""
	I1009 19:08:15.931076  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:15.931476  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:16.431279  161014 type.go:168] "Request Body" body=""
	I1009 19:08:16.431364  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:16.431823  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:16.931708  161014 type.go:168] "Request Body" body=""
	I1009 19:08:16.931791  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:16.932165  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:16.932241  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:17.430990  161014 type.go:168] "Request Body" body=""
	I1009 19:08:17.431074  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:17.431506  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:17.931431  161014 type.go:168] "Request Body" body=""
	I1009 19:08:17.931525  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:17.931892  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:18.431806  161014 type.go:168] "Request Body" body=""
	I1009 19:08:18.431897  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:18.432299  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:18.931120  161014 type.go:168] "Request Body" body=""
	I1009 19:08:18.931214  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:18.931614  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:19.431514  161014 type.go:168] "Request Body" body=""
	I1009 19:08:19.431606  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:19.432047  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:19.432124  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:19.931598  161014 type.go:168] "Request Body" body=""
	I1009 19:08:19.931691  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:19.932042  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:20.431891  161014 type.go:168] "Request Body" body=""
	I1009 19:08:20.431971  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:20.432405  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:20.931180  161014 type.go:168] "Request Body" body=""
	I1009 19:08:20.931263  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:20.931621  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:21.431543  161014 type.go:168] "Request Body" body=""
	I1009 19:08:21.431622  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:21.432021  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:21.931880  161014 type.go:168] "Request Body" body=""
	I1009 19:08:21.931973  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:21.932344  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:21.932455  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:22.431220  161014 type.go:168] "Request Body" body=""
	I1009 19:08:22.431312  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:22.431735  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:22.931611  161014 type.go:168] "Request Body" body=""
	I1009 19:08:22.931692  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:22.932047  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:23.430844  161014 type.go:168] "Request Body" body=""
	I1009 19:08:23.430928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:23.431339  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:23.931177  161014 type.go:168] "Request Body" body=""
	I1009 19:08:23.931280  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:23.931703  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:24.431533  161014 type.go:168] "Request Body" body=""
	I1009 19:08:24.431623  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:24.432029  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:24.432099  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:24.930846  161014 type.go:168] "Request Body" body=""
	I1009 19:08:24.930940  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:24.931301  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:25.431093  161014 type.go:168] "Request Body" body=""
	I1009 19:08:25.431180  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:25.431586  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:25.931364  161014 type.go:168] "Request Body" body=""
	I1009 19:08:25.931490  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:25.931848  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:26.431757  161014 type.go:168] "Request Body" body=""
	I1009 19:08:26.431844  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:26.432286  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:26.432356  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:26.931111  161014 type.go:168] "Request Body" body=""
	I1009 19:08:26.931219  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:26.931654  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:27.431562  161014 type.go:168] "Request Body" body=""
	I1009 19:08:27.431657  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:27.432104  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:27.931917  161014 type.go:168] "Request Body" body=""
	I1009 19:08:27.932031  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:27.932479  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:28.431253  161014 type.go:168] "Request Body" body=""
	I1009 19:08:28.431334  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:28.431741  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:28.931701  161014 type.go:168] "Request Body" body=""
	I1009 19:08:28.931793  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:28.932147  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:28.932231  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:29.430994  161014 type.go:168] "Request Body" body=""
	I1009 19:08:29.431076  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:29.431507  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:29.931284  161014 type.go:168] "Request Body" body=""
	I1009 19:08:29.931372  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:29.931786  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:30.431725  161014 type.go:168] "Request Body" body=""
	I1009 19:08:30.431807  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:30.432196  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:30.930995  161014 type.go:168] "Request Body" body=""
	I1009 19:08:30.931086  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:30.931489  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:31.431293  161014 type.go:168] "Request Body" body=""
	I1009 19:08:31.431407  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:31.431802  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:31.431899  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:31.931763  161014 type.go:168] "Request Body" body=""
	I1009 19:08:31.931847  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:31.932233  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:32.431064  161014 type.go:168] "Request Body" body=""
	I1009 19:08:32.431143  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:32.431569  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:32.931367  161014 type.go:168] "Request Body" body=""
	I1009 19:08:32.931475  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:32.931834  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:33.431666  161014 type.go:168] "Request Body" body=""
	I1009 19:08:33.431746  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:33.432152  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:33.432228  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:33.931085  161014 type.go:168] "Request Body" body=""
	I1009 19:08:33.931187  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:33.931603  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:34.431399  161014 type.go:168] "Request Body" body=""
	I1009 19:08:34.431485  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:34.431891  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:34.931782  161014 type.go:168] "Request Body" body=""
	I1009 19:08:34.931877  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:34.932244  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:35.431033  161014 type.go:168] "Request Body" body=""
	I1009 19:08:35.431120  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:35.431472  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:35.931247  161014 type.go:168] "Request Body" body=""
	I1009 19:08:35.931336  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:35.931759  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:35.931829  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:36.431682  161014 type.go:168] "Request Body" body=""
	I1009 19:08:36.431785  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:36.432193  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:36.931013  161014 type.go:168] "Request Body" body=""
	I1009 19:08:36.931099  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:36.931470  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:37.431265  161014 type.go:168] "Request Body" body=""
	I1009 19:08:37.431370  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:37.431819  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:37.931612  161014 type.go:168] "Request Body" body=""
	I1009 19:08:37.931700  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:37.932079  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:37.932145  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:38.430913  161014 type.go:168] "Request Body" body=""
	I1009 19:08:38.431022  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:38.431519  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:38.931233  161014 type.go:168] "Request Body" body=""
	I1009 19:08:38.931319  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:38.931686  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:39.431521  161014 type.go:168] "Request Body" body=""
	I1009 19:08:39.431627  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:39.432049  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:39.931904  161014 type.go:168] "Request Body" body=""
	I1009 19:08:39.932008  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:39.932353  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:39.932451  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:40.431183  161014 type.go:168] "Request Body" body=""
	I1009 19:08:40.431282  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:40.431716  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:40.931624  161014 type.go:168] "Request Body" body=""
	I1009 19:08:40.931713  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:40.932079  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:41.430889  161014 type.go:168] "Request Body" body=""
	I1009 19:08:41.430987  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:41.431423  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:41.931240  161014 type.go:168] "Request Body" body=""
	I1009 19:08:41.931324  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:41.931700  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:42.431534  161014 type.go:168] "Request Body" body=""
	I1009 19:08:42.431639  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:42.432064  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:42.432142  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:42.930885  161014 type.go:168] "Request Body" body=""
	I1009 19:08:42.930975  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:42.931354  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:43.431227  161014 type.go:168] "Request Body" body=""
	I1009 19:08:43.431323  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:43.431715  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:43.931552  161014 type.go:168] "Request Body" body=""
	I1009 19:08:43.931632  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:43.931992  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:44.431828  161014 type.go:168] "Request Body" body=""
	I1009 19:08:44.431924  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:44.432325  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:44.432415  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:44.931136  161014 type.go:168] "Request Body" body=""
	I1009 19:08:44.931245  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:44.931664  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:45.431554  161014 type.go:168] "Request Body" body=""
	I1009 19:08:45.431649  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:45.432042  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:45.931929  161014 type.go:168] "Request Body" body=""
	I1009 19:08:45.932032  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:45.932456  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:46.431215  161014 type.go:168] "Request Body" body=""
	I1009 19:08:46.431303  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:46.431675  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:46.931516  161014 type.go:168] "Request Body" body=""
	I1009 19:08:46.931612  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:46.932033  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:46.932105  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:47.431930  161014 type.go:168] "Request Body" body=""
	I1009 19:08:47.432024  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:47.432404  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:47.931253  161014 type.go:168] "Request Body" body=""
	I1009 19:08:47.931351  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:47.931772  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:48.431679  161014 type.go:168] "Request Body" body=""
	I1009 19:08:48.431767  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:48.432147  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:48.930986  161014 type.go:168] "Request Body" body=""
	I1009 19:08:48.931073  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:48.931466  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:49.431246  161014 type.go:168] "Request Body" body=""
	I1009 19:08:49.431332  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:49.431709  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:49.431791  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:49.931583  161014 type.go:168] "Request Body" body=""
	I1009 19:08:49.931665  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:49.932043  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:50.430854  161014 type.go:168] "Request Body" body=""
	I1009 19:08:50.430942  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:50.431310  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:50.931059  161014 type.go:168] "Request Body" body=""
	I1009 19:08:50.931138  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:50.931534  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:51.431317  161014 type.go:168] "Request Body" body=""
	I1009 19:08:51.431423  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:51.431783  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:51.431860  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:51.931678  161014 type.go:168] "Request Body" body=""
	I1009 19:08:51.931770  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:51.932161  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:52.430940  161014 type.go:168] "Request Body" body=""
	I1009 19:08:52.431043  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:52.431471  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:52.931234  161014 type.go:168] "Request Body" body=""
	I1009 19:08:52.931317  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:52.931697  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:53.431539  161014 type.go:168] "Request Body" body=""
	I1009 19:08:53.431626  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:53.432040  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:53.432105  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:53.931898  161014 type.go:168] "Request Body" body=""
	I1009 19:08:53.931980  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:53.932334  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:54.431123  161014 type.go:168] "Request Body" body=""
	I1009 19:08:54.431206  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:54.431572  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:54.931007  161014 type.go:168] "Request Body" body=""
	I1009 19:08:54.931094  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:54.931473  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:55.431255  161014 type.go:168] "Request Body" body=""
	I1009 19:08:55.431334  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:55.431719  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:55.931595  161014 type.go:168] "Request Body" body=""
	I1009 19:08:55.931684  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:55.932059  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:55.932132  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:56.430905  161014 type.go:168] "Request Body" body=""
	I1009 19:08:56.430996  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:56.431358  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:56.931139  161014 type.go:168] "Request Body" body=""
	I1009 19:08:56.931225  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:56.931614  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:57.431422  161014 type.go:168] "Request Body" body=""
	I1009 19:08:57.431520  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:57.431890  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:57.931717  161014 type.go:168] "Request Body" body=""
	I1009 19:08:57.931804  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:57.932243  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:57.932309  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:58.431442  161014 type.go:168] "Request Body" body=""
	I1009 19:08:58.431719  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:58.432305  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:58.931643  161014 type.go:168] "Request Body" body=""
	I1009 19:08:58.931736  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:58.932089  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:59.431793  161014 type.go:168] "Request Body" body=""
	I1009 19:08:59.431868  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:59.432216  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:59.931889  161014 type.go:168] "Request Body" body=""
	I1009 19:08:59.931982  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:59.932322  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:59.932430  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:00.430938  161014 type.go:168] "Request Body" body=""
	I1009 19:09:00.431025  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:00.431413  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:00.930953  161014 type.go:168] "Request Body" body=""
	I1009 19:09:00.931042  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:00.931443  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:01.431021  161014 type.go:168] "Request Body" body=""
	I1009 19:09:01.431114  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:01.431513  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:01.931074  161014 type.go:168] "Request Body" body=""
	I1009 19:09:01.931154  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:01.931545  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:02.431340  161014 type.go:168] "Request Body" body=""
	I1009 19:09:02.431449  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:02.431830  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:02.431902  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:02.931823  161014 type.go:168] "Request Body" body=""
	I1009 19:09:02.931913  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:02.932314  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:03.431114  161014 type.go:168] "Request Body" body=""
	I1009 19:09:03.431193  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:03.431578  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:03.931464  161014 type.go:168] "Request Body" body=""
	I1009 19:09:03.931552  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:03.931986  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:04.431831  161014 type.go:168] "Request Body" body=""
	I1009 19:09:04.431934  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:04.432314  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:04.432398  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:04.931129  161014 type.go:168] "Request Body" body=""
	I1009 19:09:04.931216  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:04.931674  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:05.431533  161014 type.go:168] "Request Body" body=""
	I1009 19:09:05.431611  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:05.432021  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:05.931854  161014 type.go:168] "Request Body" body=""
	I1009 19:09:05.931940  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:05.932334  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:06.431088  161014 type.go:168] "Request Body" body=""
	I1009 19:09:06.431167  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:06.431515  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:06.931278  161014 type.go:168] "Request Body" body=""
	I1009 19:09:06.931362  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:06.931748  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:06.931816  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:07.431644  161014 type.go:168] "Request Body" body=""
	I1009 19:09:07.431747  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:07.432178  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:07.931866  161014 type.go:168] "Request Body" body=""
	I1009 19:09:07.931950  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:07.932290  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:08.431090  161014 type.go:168] "Request Body" body=""
	I1009 19:09:08.431172  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:08.431650  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:08.931429  161014 type.go:168] "Request Body" body=""
	I1009 19:09:08.931507  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:08.931843  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:08.931909  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:09.431805  161014 type.go:168] "Request Body" body=""
	I1009 19:09:09.431897  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:09.432328  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:09.931113  161014 type.go:168] "Request Body" body=""
	I1009 19:09:09.931194  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:09.931569  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:10.431340  161014 type.go:168] "Request Body" body=""
	I1009 19:09:10.431473  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:10.431864  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:10.931696  161014 type.go:168] "Request Body" body=""
	I1009 19:09:10.931778  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:10.932062  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:10.932116  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:11.430855  161014 type.go:168] "Request Body" body=""
	I1009 19:09:11.430938  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:11.431371  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:11.931153  161014 type.go:168] "Request Body" body=""
	I1009 19:09:11.931230  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:11.931601  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:12.431453  161014 type.go:168] "Request Body" body=""
	I1009 19:09:12.431539  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:12.431968  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:12.931803  161014 type.go:168] "Request Body" body=""
	I1009 19:09:12.931890  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:12.932230  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:12.932299  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:13.431049  161014 type.go:168] "Request Body" body=""
	I1009 19:09:13.431141  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:13.431581  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:13.931422  161014 type.go:168] "Request Body" body=""
	I1009 19:09:13.931504  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:13.931849  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:14.431710  161014 type.go:168] "Request Body" body=""
	I1009 19:09:14.431803  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:14.432200  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:14.930978  161014 type.go:168] "Request Body" body=""
	I1009 19:09:14.931058  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:14.931421  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:15.431205  161014 type.go:168] "Request Body" body=""
	I1009 19:09:15.431290  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:15.431792  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:15.431868  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:15.931738  161014 type.go:168] "Request Body" body=""
	I1009 19:09:15.931822  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:15.932171  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:16.430949  161014 type.go:168] "Request Body" body=""
	I1009 19:09:16.431033  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:16.431370  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:16.931168  161014 type.go:168] "Request Body" body=""
	I1009 19:09:16.931244  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:16.931635  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:17.431446  161014 type.go:168] "Request Body" body=""
	I1009 19:09:17.431534  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:17.431909  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:17.431982  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:17.931495  161014 type.go:168] "Request Body" body=""
	I1009 19:09:17.931580  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:17.931927  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:18.431744  161014 type.go:168] "Request Body" body=""
	I1009 19:09:18.431828  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:18.432200  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:18.931151  161014 type.go:168] "Request Body" body=""
	I1009 19:09:18.931250  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:18.931652  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:19.431441  161014 type.go:168] "Request Body" body=""
	I1009 19:09:19.431529  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:19.431984  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:19.432070  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:19.931848  161014 type.go:168] "Request Body" body=""
	I1009 19:09:19.931941  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:19.932309  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:20.431088  161014 type.go:168] "Request Body" body=""
	I1009 19:09:20.431164  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:20.431555  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:20.931352  161014 type.go:168] "Request Body" body=""
	I1009 19:09:20.931455  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:20.931826  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:21.431728  161014 type.go:168] "Request Body" body=""
	I1009 19:09:21.431814  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:21.432175  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:21.432242  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:21.930958  161014 type.go:168] "Request Body" body=""
	I1009 19:09:21.931034  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:21.931435  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:22.431185  161014 type.go:168] "Request Body" body=""
	I1009 19:09:22.431270  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:22.431704  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:22.931192  161014 type.go:168] "Request Body" body=""
	I1009 19:09:22.931273  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:22.931689  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:23.431502  161014 type.go:168] "Request Body" body=""
	I1009 19:09:23.431580  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:23.431996  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:23.930860  161014 type.go:168] "Request Body" body=""
	I1009 19:09:23.930955  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:23.931370  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:23.931478  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:24.431207  161014 type.go:168] "Request Body" body=""
	I1009 19:09:24.431286  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:24.431650  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:24.931519  161014 type.go:168] "Request Body" body=""
	I1009 19:09:24.931614  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:24.931998  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:25.431913  161014 type.go:168] "Request Body" body=""
	I1009 19:09:25.432001  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:25.432369  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:25.931194  161014 type.go:168] "Request Body" body=""
	I1009 19:09:25.931275  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:25.931711  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:25.931786  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:26.431609  161014 type.go:168] "Request Body" body=""
	I1009 19:09:26.431690  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:26.432050  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:26.931918  161014 type.go:168] "Request Body" body=""
	I1009 19:09:26.932020  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:26.932417  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:27.431187  161014 type.go:168] "Request Body" body=""
	I1009 19:09:27.431268  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:27.431666  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:27.931530  161014 type.go:168] "Request Body" body=""
	I1009 19:09:27.931614  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:27.931987  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:27.932055  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:28.431844  161014 type.go:168] "Request Body" body=""
	I1009 19:09:28.431933  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:28.432359  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:28.931165  161014 type.go:168] "Request Body" body=""
	I1009 19:09:28.931247  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:28.931680  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:29.431569  161014 type.go:168] "Request Body" body=""
	I1009 19:09:29.431650  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:29.432040  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:29.931942  161014 type.go:168] "Request Body" body=""
	I1009 19:09:29.932027  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:29.932374  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:29.932460  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:30.431194  161014 type.go:168] "Request Body" body=""
	I1009 19:09:30.431282  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:30.431737  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:30.931616  161014 type.go:168] "Request Body" body=""
	I1009 19:09:30.931725  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:30.932121  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:31.430987  161014 type.go:168] "Request Body" body=""
	I1009 19:09:31.431078  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:31.431478  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:31.931232  161014 type.go:168] "Request Body" body=""
	I1009 19:09:31.931315  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:31.931680  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:32.431533  161014 type.go:168] "Request Body" body=""
	I1009 19:09:32.431613  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:32.431992  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:32.432063  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:32.931853  161014 type.go:168] "Request Body" body=""
	I1009 19:09:32.931950  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:32.932297  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:33.431044  161014 type.go:168] "Request Body" body=""
	I1009 19:09:33.431132  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:33.431543  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:33.931355  161014 type.go:168] "Request Body" body=""
	I1009 19:09:33.931458  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:33.931818  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:34.431650  161014 type.go:168] "Request Body" body=""
	I1009 19:09:34.431733  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:34.432148  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:34.432213  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:34.930967  161014 type.go:168] "Request Body" body=""
	I1009 19:09:34.931063  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:34.931473  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:35.431283  161014 type.go:168] "Request Body" body=""
	I1009 19:09:35.431373  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:35.431779  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:35.931628  161014 type.go:168] "Request Body" body=""
	I1009 19:09:35.931736  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:35.932084  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:36.430910  161014 type.go:168] "Request Body" body=""
	I1009 19:09:36.431012  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:36.431444  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:36.931340  161014 type.go:168] "Request Body" body=""
	I1009 19:09:36.931451  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:36.931825  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:36.931893  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:37.431740  161014 type.go:168] "Request Body" body=""
	I1009 19:09:37.431822  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:37.432174  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:37.931117  161014 type.go:168] "Request Body" body=""
	I1009 19:09:37.931218  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:37.931587  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:38.431359  161014 type.go:168] "Request Body" body=""
	I1009 19:09:38.431467  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:38.431870  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:38.931821  161014 type.go:168] "Request Body" body=""
	I1009 19:09:38.931902  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:38.932265  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:38.932358  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:39.431099  161014 type.go:168] "Request Body" body=""
	I1009 19:09:39.431179  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:39.431570  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:39.931428  161014 type.go:168] "Request Body" body=""
	I1009 19:09:39.931517  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:39.931883  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:40.431747  161014 type.go:168] "Request Body" body=""
	I1009 19:09:40.431830  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:40.432201  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:40.931026  161014 type.go:168] "Request Body" body=""
	I1009 19:09:40.931135  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:40.931594  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:41.431370  161014 type.go:168] "Request Body" body=""
	I1009 19:09:41.431476  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:41.431863  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:41.431951  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:41.931795  161014 type.go:168] "Request Body" body=""
	I1009 19:09:41.931873  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:41.932227  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:42.431026  161014 type.go:168] "Request Body" body=""
	I1009 19:09:42.431112  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:42.431474  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:42.931233  161014 type.go:168] "Request Body" body=""
	I1009 19:09:42.931308  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:42.931720  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:43.431625  161014 type.go:168] "Request Body" body=""
	I1009 19:09:43.431708  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:43.432076  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:43.432152  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:43.930876  161014 type.go:168] "Request Body" body=""
	I1009 19:09:43.930965  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:43.931363  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:44.431159  161014 type.go:168] "Request Body" body=""
	I1009 19:09:44.431252  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:44.431660  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:44.931539  161014 type.go:168] "Request Body" body=""
	I1009 19:09:44.931619  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:44.932022  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:45.431848  161014 type.go:168] "Request Body" body=""
	I1009 19:09:45.431928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:45.432294  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:45.432362  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:45.931071  161014 type.go:168] "Request Body" body=""
	I1009 19:09:45.931154  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:45.931550  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:46.431330  161014 type.go:168] "Request Body" body=""
	I1009 19:09:46.431433  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:46.431785  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:46.931618  161014 type.go:168] "Request Body" body=""
	I1009 19:09:46.931717  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:46.932083  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:47.430878  161014 type.go:168] "Request Body" body=""
	I1009 19:09:47.430967  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:47.431308  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:47.931113  161014 type.go:168] "Request Body" body=""
	I1009 19:09:47.931193  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:47.931575  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:47.931645  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:48.431350  161014 type.go:168] "Request Body" body=""
	I1009 19:09:48.431448  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:48.431909  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:48.931846  161014 type.go:168] "Request Body" body=""
	I1009 19:09:48.931928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:48.932292  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:49.431050  161014 type.go:168] "Request Body" body=""
	I1009 19:09:49.431125  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:49.431508  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:49.931265  161014 type.go:168] "Request Body" body=""
	I1009 19:09:49.931345  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:49.931748  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:49.931814  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:50.431652  161014 type.go:168] "Request Body" body=""
	I1009 19:09:50.431747  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:50.432126  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:50.930878  161014 type.go:168] "Request Body" body=""
	I1009 19:09:50.930959  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:50.931336  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:51.431163  161014 type.go:168] "Request Body" body=""
	I1009 19:09:51.431258  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:51.431622  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:51.931358  161014 type.go:168] "Request Body" body=""
	I1009 19:09:51.931464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:51.931852  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:51.931924  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:52.431703  161014 type.go:168] "Request Body" body=""
	I1009 19:09:52.431795  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:52.432179  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:52.930954  161014 type.go:168] "Request Body" body=""
	I1009 19:09:52.931050  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:52.931459  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:53.431224  161014 type.go:168] "Request Body" body=""
	I1009 19:09:53.431365  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:53.431740  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:53.931748  161014 type.go:168] "Request Body" body=""
	I1009 19:09:53.931831  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:53.932191  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:53.932260  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:54.430975  161014 type.go:168] "Request Body" body=""
	I1009 19:09:54.431053  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:54.431476  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:54.931249  161014 type.go:168] "Request Body" body=""
	I1009 19:09:54.931341  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:54.931729  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:55.431607  161014 type.go:168] "Request Body" body=""
	I1009 19:09:55.431691  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:55.432110  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:55.930917  161014 type.go:168] "Request Body" body=""
	I1009 19:09:55.931003  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:55.931362  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:56.431145  161014 type.go:168] "Request Body" body=""
	I1009 19:09:56.431222  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:56.431639  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:56.431710  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:56.931556  161014 type.go:168] "Request Body" body=""
	I1009 19:09:56.931656  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:56.932062  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:57.431903  161014 type.go:168] "Request Body" body=""
	I1009 19:09:57.431989  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:57.432337  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:57.931402  161014 type.go:168] "Request Body" body=""
	I1009 19:09:57.931482  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:57.931843  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:58.431701  161014 type.go:168] "Request Body" body=""
	I1009 19:09:58.431790  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:58.432145  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:58.432218  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:58.931088  161014 type.go:168] "Request Body" body=""
	I1009 19:09:58.931175  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:58.931505  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:59.431298  161014 type.go:168] "Request Body" body=""
	I1009 19:09:59.431395  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:59.431751  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:59.931602  161014 type.go:168] "Request Body" body=""
	I1009 19:09:59.931702  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:59.932051  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:00.430856  161014 type.go:168] "Request Body" body=""
	I1009 19:10:00.430958  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:00.431337  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:00.931121  161014 type.go:168] "Request Body" body=""
	I1009 19:10:00.931220  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:00.931593  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:00.931674  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:01.431423  161014 type.go:168] "Request Body" body=""
	I1009 19:10:01.431509  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:01.431863  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:01.931614  161014 type.go:168] "Request Body" body=""
	I1009 19:10:01.931705  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:01.932079  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:02.430870  161014 type.go:168] "Request Body" body=""
	I1009 19:10:02.430952  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:02.431333  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:02.931135  161014 type.go:168] "Request Body" body=""
	I1009 19:10:02.931235  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:02.931629  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:02.931714  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:03.431520  161014 type.go:168] "Request Body" body=""
	I1009 19:10:03.431673  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:03.432032  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:03.930864  161014 type.go:168] "Request Body" body=""
	I1009 19:10:03.930947  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:03.931344  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:04.431204  161014 type.go:168] "Request Body" body=""
	I1009 19:10:04.431296  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:04.431704  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:04.931600  161014 type.go:168] "Request Body" body=""
	I1009 19:10:04.931678  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:04.932040  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:04.932106  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:05.430899  161014 type.go:168] "Request Body" body=""
	I1009 19:10:05.431003  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:05.431407  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:05.931195  161014 type.go:168] "Request Body" body=""
	I1009 19:10:05.931270  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:05.931635  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:06.431451  161014 type.go:168] "Request Body" body=""
	I1009 19:10:06.431534  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:06.431953  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:06.931837  161014 type.go:168] "Request Body" body=""
	I1009 19:10:06.931927  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:06.932279  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:06.932345  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:07.431076  161014 type.go:168] "Request Body" body=""
	I1009 19:10:07.431164  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:07.431506  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:07.931394  161014 type.go:168] "Request Body" body=""
	I1009 19:10:07.931475  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:07.931835  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:08.431660  161014 type.go:168] "Request Body" body=""
	I1009 19:10:08.431741  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:08.432102  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:08.930920  161014 type.go:168] "Request Body" body=""
	I1009 19:10:08.930998  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:08.931426  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:09.431179  161014 type.go:168] "Request Body" body=""
	I1009 19:10:09.431260  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:09.431640  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:09.431713  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:09.931551  161014 type.go:168] "Request Body" body=""
	I1009 19:10:09.931636  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:09.932085  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:10.430911  161014 type.go:168] "Request Body" body=""
	I1009 19:10:10.431004  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:10.431408  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:10.931180  161014 type.go:168] "Request Body" body=""
	I1009 19:10:10.931260  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:10.931651  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:11.431533  161014 type.go:168] "Request Body" body=""
	I1009 19:10:11.431610  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:11.432017  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:11.432093  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:11.930846  161014 type.go:168] "Request Body" body=""
	I1009 19:10:11.930928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:11.931300  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:12.431099  161014 type.go:168] "Request Body" body=""
	I1009 19:10:12.431188  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:12.431615  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:12.931577  161014 type.go:168] "Request Body" body=""
	I1009 19:10:12.931661  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:12.932029  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:13.431910  161014 type.go:168] "Request Body" body=""
	I1009 19:10:13.431995  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:13.432350  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:13.432438  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:13.931217  161014 type.go:168] "Request Body" body=""
	I1009 19:10:13.931302  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:13.931678  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:14.431548  161014 type.go:168] "Request Body" body=""
	I1009 19:10:14.431638  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:14.432110  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:14.930876  161014 type.go:168] "Request Body" body=""
	I1009 19:10:14.930963  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:14.931343  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:15.431156  161014 type.go:168] "Request Body" body=""
	I1009 19:10:15.431244  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:15.431618  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:15.931358  161014 type.go:168] "Request Body" body=""
	I1009 19:10:15.931451  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:15.931817  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:15.931883  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:16.431696  161014 type.go:168] "Request Body" body=""
	I1009 19:10:16.431794  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:16.432158  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:16.930930  161014 type.go:168] "Request Body" body=""
	I1009 19:10:16.931010  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:16.931370  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:17.431200  161014 type.go:168] "Request Body" body=""
	I1009 19:10:17.431290  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:17.431663  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:17.931525  161014 type.go:168] "Request Body" body=""
	I1009 19:10:17.931613  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:17.932012  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:17.932077  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:18.431980  161014 type.go:168] "Request Body" body=""
	I1009 19:10:18.432065  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:18.432498  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:18.931327  161014 type.go:168] "Request Body" body=""
	I1009 19:10:18.931435  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:18.931798  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:19.431649  161014 type.go:168] "Request Body" body=""
	I1009 19:10:19.431736  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:19.432133  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:19.930941  161014 type.go:168] "Request Body" body=""
	I1009 19:10:19.931034  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:19.931426  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:20.431191  161014 type.go:168] "Request Body" body=""
	I1009 19:10:20.431277  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:20.431702  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:20.431786  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:20.931649  161014 type.go:168] "Request Body" body=""
	I1009 19:10:20.931743  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:20.932145  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:21.430998  161014 type.go:168] "Request Body" body=""
	I1009 19:10:21.431093  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:21.431500  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:21.931294  161014 type.go:168] "Request Body" body=""
	I1009 19:10:21.931404  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:21.931769  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:22.431592  161014 type.go:168] "Request Body" body=""
	I1009 19:10:22.431689  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:22.432061  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:22.432138  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:22.930890  161014 type.go:168] "Request Body" body=""
	I1009 19:10:22.930981  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:22.931355  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:23.431123  161014 type.go:168] "Request Body" body=""
	I1009 19:10:23.431202  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:23.431562  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:23.931393  161014 type.go:168] "Request Body" body=""
	I1009 19:10:23.931475  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:23.931849  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:24.431681  161014 type.go:168] "Request Body" body=""
	I1009 19:10:24.431765  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:24.432120  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:24.432200  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:24.930948  161014 type.go:168] "Request Body" body=""
	I1009 19:10:24.931038  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:24.931411  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:25.431172  161014 type.go:168] "Request Body" body=""
	I1009 19:10:25.431263  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:25.431645  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:25.931517  161014 type.go:168] "Request Body" body=""
	I1009 19:10:25.931604  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:25.931950  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:26.431795  161014 type.go:168] "Request Body" body=""
	I1009 19:10:26.431877  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:26.432259  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:26.432327  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:26.931108  161014 type.go:168] "Request Body" body=""
	I1009 19:10:26.931192  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:26.931561  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:27.431372  161014 type.go:168] "Request Body" body=""
	I1009 19:10:27.431478  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:27.431852  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:27.931767  161014 type.go:168] "Request Body" body=""
	I1009 19:10:27.931844  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:27.932243  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:28.431036  161014 type.go:168] "Request Body" body=""
	I1009 19:10:28.431114  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:28.431500  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:28.931317  161014 type.go:168] "Request Body" body=""
	I1009 19:10:28.931415  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:28.931802  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:28.931870  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:29.431682  161014 type.go:168] "Request Body" body=""
	I1009 19:10:29.431764  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:29.432158  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:29.930948  161014 type.go:168] "Request Body" body=""
	I1009 19:10:29.931029  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:29.931432  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:30.431237  161014 type.go:168] "Request Body" body=""
	I1009 19:10:30.431318  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:30.431700  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:30.931592  161014 type.go:168] "Request Body" body=""
	I1009 19:10:30.931686  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:30.932066  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:30.932138  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:31.430865  161014 type.go:168] "Request Body" body=""
	I1009 19:10:31.430944  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:31.431326  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:31.931100  161014 type.go:168] "Request Body" body=""
	I1009 19:10:31.931183  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:31.931557  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:32.431408  161014 type.go:168] "Request Body" body=""
	I1009 19:10:32.431492  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:32.431860  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:32.931727  161014 type.go:168] "Request Body" body=""
	I1009 19:10:32.931827  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:32.932201  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:32.932275  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:33.431035  161014 type.go:168] "Request Body" body=""
	I1009 19:10:33.431127  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:33.431509  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:33.931347  161014 type.go:168] "Request Body" body=""
	I1009 19:10:33.931452  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:33.931805  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:34.431659  161014 type.go:168] "Request Body" body=""
	I1009 19:10:34.431767  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:34.432157  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:34.930935  161014 type.go:168] "Request Body" body=""
	I1009 19:10:34.931032  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:34.931422  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:35.431188  161014 type.go:168] "Request Body" body=""
	I1009 19:10:35.431266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:35.431638  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:35.431700  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:35.931496  161014 type.go:168] "Request Body" body=""
	I1009 19:10:35.931583  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:35.931982  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:36.431843  161014 type.go:168] "Request Body" body=""
	I1009 19:10:36.431930  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:36.432287  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:36.931012  161014 type.go:168] "Request Body" body=""
	I1009 19:10:36.931101  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:36.931479  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:37.431254  161014 type.go:168] "Request Body" body=""
	I1009 19:10:37.431336  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:37.431708  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:37.431785  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:37.931498  161014 type.go:168] "Request Body" body=""
	I1009 19:10:37.931578  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:37.931952  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:38.431802  161014 type.go:168] "Request Body" body=""
	I1009 19:10:38.431878  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:38.432242  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:38.931094  161014 type.go:168] "Request Body" body=""
	I1009 19:10:38.931171  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:38.931535  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:39.431342  161014 type.go:168] "Request Body" body=""
	I1009 19:10:39.431467  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:39.431828  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:39.431894  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:39.931678  161014 type.go:168] "Request Body" body=""
	I1009 19:10:39.931769  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:39.932114  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:40.430894  161014 type.go:168] "Request Body" body=""
	I1009 19:10:40.431002  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:40.431338  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:40.931086  161014 type.go:168] "Request Body" body=""
	I1009 19:10:40.931169  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:40.931549  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:41.431354  161014 type.go:168] "Request Body" body=""
	I1009 19:10:41.431484  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:41.431936  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:41.432009  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:41.931856  161014 type.go:168] "Request Body" body=""
	I1009 19:10:41.931944  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:41.932342  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:42.431239  161014 type.go:168] "Request Body" body=""
	I1009 19:10:42.431343  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:42.431776  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:42.931642  161014 type.go:168] "Request Body" body=""
	I1009 19:10:42.931724  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:42.932139  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:43.430955  161014 type.go:168] "Request Body" body=""
	I1009 19:10:43.431055  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:43.431482  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:43.931286  161014 type.go:168] "Request Body" body=""
	I1009 19:10:43.931364  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:43.931761  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:43.931841  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:44.431651  161014 type.go:168] "Request Body" body=""
	I1009 19:10:44.431739  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:44.432136  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:44.930918  161014 type.go:168] "Request Body" body=""
	I1009 19:10:44.930997  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:44.931368  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:45.431210  161014 type.go:168] "Request Body" body=""
	I1009 19:10:45.431301  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:45.431803  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:45.931785  161014 type.go:168] "Request Body" body=""
	I1009 19:10:45.931879  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:45.932234  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:45.932298  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:46.431044  161014 type.go:168] "Request Body" body=""
	I1009 19:10:46.431130  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:46.431509  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:46.931298  161014 type.go:168] "Request Body" body=""
	I1009 19:10:46.931409  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:46.931768  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:47.431684  161014 type.go:168] "Request Body" body=""
	I1009 19:10:47.431772  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:47.432192  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:47.930892  161014 type.go:168] "Request Body" body=""
	I1009 19:10:47.931082  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:47.931491  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:48.431254  161014 type.go:168] "Request Body" body=""
	I1009 19:10:48.431334  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:48.431754  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:48.431817  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:48.931519  161014 type.go:168] "Request Body" body=""
	I1009 19:10:48.931605  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:48.931963  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:49.431903  161014 type.go:168] "Request Body" body=""
	I1009 19:10:49.431995  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:49.432442  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:49.931216  161014 type.go:168] "Request Body" body=""
	I1009 19:10:49.931299  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:49.931685  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:50.431513  161014 type.go:168] "Request Body" body=""
	I1009 19:10:50.431600  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:50.432015  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:50.432094  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:50.931903  161014 type.go:168] "Request Body" body=""
	I1009 19:10:50.931985  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:50.932356  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:51.431151  161014 type.go:168] "Request Body" body=""
	I1009 19:10:51.431235  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:51.431691  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:51.931607  161014 type.go:168] "Request Body" body=""
	I1009 19:10:51.931704  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:51.932066  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:52.430855  161014 type.go:168] "Request Body" body=""
	I1009 19:10:52.430936  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:52.431352  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:52.931144  161014 type.go:168] "Request Body" body=""
	I1009 19:10:52.931236  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:52.931629  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:52.931694  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:53.431504  161014 type.go:168] "Request Body" body=""
	I1009 19:10:53.431592  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:53.431978  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:53.930879  161014 type.go:168] "Request Body" body=""
	I1009 19:10:53.930990  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:53.931420  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:54.431176  161014 type.go:168] "Request Body" body=""
	I1009 19:10:54.431256  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:54.431696  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:54.931517  161014 type.go:168] "Request Body" body=""
	I1009 19:10:54.931600  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:54.932006  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:54.932070  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:55.431919  161014 type.go:168] "Request Body" body=""
	I1009 19:10:55.432013  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:55.432499  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:55.931252  161014 type.go:168] "Request Body" body=""
	I1009 19:10:55.931340  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:55.931770  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:56.431601  161014 type.go:168] "Request Body" body=""
	I1009 19:10:56.431705  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:56.432063  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:56.930857  161014 type.go:168] "Request Body" body=""
	I1009 19:10:56.930944  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:56.931308  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:57.431063  161014 type.go:168] "Request Body" body=""
	I1009 19:10:57.431152  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:57.431557  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:57.431627  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:57.931435  161014 type.go:168] "Request Body" body=""
	I1009 19:10:57.931520  161014 node_ready.go:38] duration metric: took 6m0.000788191s for node "functional-158523" to be "Ready" ...
	I1009 19:10:57.934316  161014 out.go:203] 
	W1009 19:10:57.935818  161014 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:10:57.935834  161014 out.go:285] * 
	W1009 19:10:57.937485  161014 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:10:57.938875  161014 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:10:51 functional-158523 crio[2962]: time="2025-10-09T19:10:51.642256781Z" level=info msg="createCtr: removing container 2490de6b39f748d402af3495e43fc05576eccbab98ebd5bbfdff943d4e40f275" id=da0b9f6c-a2b3-428e-9d4b-fa205d5f27f7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:51 functional-158523 crio[2962]: time="2025-10-09T19:10:51.642289335Z" level=info msg="createCtr: deleting container 2490de6b39f748d402af3495e43fc05576eccbab98ebd5bbfdff943d4e40f275 from storage" id=da0b9f6c-a2b3-428e-9d4b-fa205d5f27f7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:51 functional-158523 crio[2962]: time="2025-10-09T19:10:51.644687569Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-158523_kube-system_08a2248bbc397b9ed927890e2073120b_0" id=da0b9f6c-a2b3-428e-9d4b-fa205d5f27f7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.619676195Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=c07a27ea-6a5f-460c-a647-b32add76a687 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.620620769Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=2f7e3b10-f004-4065-9292-3fe1f8b15f45 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.621576231Z" level=info msg="Creating container: kube-system/etcd-functional-158523/etcd" id=d3bb353b-5939-49f6-9490-9518c0763165 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.621810647Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.624939514Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.625350062Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.640320483Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d3bb353b-5939-49f6-9490-9518c0763165 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.641756335Z" level=info msg="createCtr: deleting container ID b74958fde2f0fcf56a952a9f8f9e70895129cc6d7951fe9eb8b5202c0a41081b from idIndex" id=d3bb353b-5939-49f6-9490-9518c0763165 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.641796746Z" level=info msg="createCtr: removing container b74958fde2f0fcf56a952a9f8f9e70895129cc6d7951fe9eb8b5202c0a41081b" id=d3bb353b-5939-49f6-9490-9518c0763165 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.641836923Z" level=info msg="createCtr: deleting container b74958fde2f0fcf56a952a9f8f9e70895129cc6d7951fe9eb8b5202c0a41081b from storage" id=d3bb353b-5939-49f6-9490-9518c0763165 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.644007781Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-158523_kube-system_8f4f9df5924bbaa4e1ec7f60e6576647_0" id=d3bb353b-5939-49f6-9490-9518c0763165 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.61882509Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=9e80cc50-ec3a-4aab-92b0-554cd819b949 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.619862164Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b0596507-769f-4dd5-9ef6-c22f860962cd name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.620798167Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-158523/kube-apiserver" id=6a2a7b62-51c2-4e09-bb0d-30b3eebccef2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.621043823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.624221756Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.624620837Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.639591129Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=6a2a7b62-51c2-4e09-bb0d-30b3eebccef2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.64106485Z" level=info msg="createCtr: deleting container ID 598cca6b8fe371ebc8cdb0f104a2eef6aa85302a4515ee5a3d9d0ad354a967f4 from idIndex" id=6a2a7b62-51c2-4e09-bb0d-30b3eebccef2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.641107944Z" level=info msg="createCtr: removing container 598cca6b8fe371ebc8cdb0f104a2eef6aa85302a4515ee5a3d9d0ad354a967f4" id=6a2a7b62-51c2-4e09-bb0d-30b3eebccef2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.641141296Z" level=info msg="createCtr: deleting container 598cca6b8fe371ebc8cdb0f104a2eef6aa85302a4515ee5a3d9d0ad354a967f4 from storage" id=6a2a7b62-51c2-4e09-bb0d-30b3eebccef2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.643279067Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-158523_kube-system_bbd906eec6f9b7c1a1a340fc9a9fdcd1_0" id=6a2a7b62-51c2-4e09-bb0d-30b3eebccef2 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:10:59.714010    4376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:10:59.714599    4376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:10:59.716192    4376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:10:59.716688    4376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:10:59.718273    4376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:10:59 up 53 min,  0 user,  load average: 0.00, 0.13, 9.35
	Linux functional-158523 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:10:51 functional-158523 kubelet[1810]: E1009 19:10:51.656307    1810 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-158523\" not found"
	Oct 09 19:10:52 functional-158523 kubelet[1810]: E1009 19:10:52.308572    1810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-158523?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 19:10:52 functional-158523 kubelet[1810]: I1009 19:10:52.516174    1810 kubelet_node_status.go:75] "Attempting to register node" node="functional-158523"
	Oct 09 19:10:52 functional-158523 kubelet[1810]: E1009 19:10:52.516602    1810 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-158523"
	Oct 09 19:10:53 functional-158523 kubelet[1810]: E1009 19:10:53.619138    1810 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:10:53 functional-158523 kubelet[1810]: E1009 19:10:53.644430    1810 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:10:53 functional-158523 kubelet[1810]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:10:53 functional-158523 kubelet[1810]:  > podSandboxID="c5f59cf39316c74dd65d2925d309cbd6e6fdc48c022b61803b3c6d8d973e588c"
	Oct 09 19:10:53 functional-158523 kubelet[1810]: E1009 19:10:53.644560    1810 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:10:53 functional-158523 kubelet[1810]:         container etcd start failed in pod etcd-functional-158523_kube-system(8f4f9df5924bbaa4e1ec7f60e6576647): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:10:53 functional-158523 kubelet[1810]:  > logger="UnhandledError"
	Oct 09 19:10:53 functional-158523 kubelet[1810]: E1009 19:10:53.644607    1810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-158523" podUID="8f4f9df5924bbaa4e1ec7f60e6576647"
	Oct 09 19:10:56 functional-158523 kubelet[1810]: E1009 19:10:56.592982    1810 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-158523.186ce7d3e1d25377\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-158523.186ce7d3e1d25377  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-158523,UID:functional-158523,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-158523 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-158523,},FirstTimestamp:2025-10-09 19:00:51.607794551 +0000 UTC m=+0.591054211,LastTimestamp:2025-10-09 19:00:51.609818572 +0000 UTC m=+0.593078239,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-158523,}"
	Oct 09 19:10:56 functional-158523 kubelet[1810]: E1009 19:10:56.618286    1810 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:10:56 functional-158523 kubelet[1810]: E1009 19:10:56.643583    1810 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:10:56 functional-158523 kubelet[1810]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:10:56 functional-158523 kubelet[1810]:  > podSandboxID="e6a4bc1b2df9d751888af8288e7c4c569afb0335567fe2f74c173dbe4e47f513"
	Oct 09 19:10:56 functional-158523 kubelet[1810]: E1009 19:10:56.643725    1810 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:10:56 functional-158523 kubelet[1810]:         container kube-apiserver start failed in pod kube-apiserver-functional-158523_kube-system(bbd906eec6f9b7c1a1a340fc9a9fdcd1): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:10:56 functional-158523 kubelet[1810]:  > logger="UnhandledError"
	Oct 09 19:10:56 functional-158523 kubelet[1810]: E1009 19:10:56.643761    1810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-158523" podUID="bbd906eec6f9b7c1a1a340fc9a9fdcd1"
	Oct 09 19:10:59 functional-158523 kubelet[1810]: E1009 19:10:59.309466    1810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-158523?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 19:10:59 functional-158523 kubelet[1810]: E1009 19:10:59.406184    1810 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 09 19:10:59 functional-158523 kubelet[1810]: I1009 19:10:59.517992    1810 kubelet_node_status.go:75] "Attempting to register node" node="functional-158523"
	Oct 09 19:10:59 functional-158523 kubelet[1810]: E1009 19:10:59.518407    1810 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-158523"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523: exit status 2 (312.443864ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-158523" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (366.78s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (2.16s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-158523 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-158523 get po -A: exit status 1 (58.864801ms)

                                                
                                                
** stderr ** 
	E1009 19:11:00.703107  164662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:11:00.703524  164662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:11:00.704958  164662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:11:00.705301  164662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:11:00.706752  164662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-158523 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"E1009 19:11:00.703107  164662 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1009 19:11:00.703524  164662 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1009 19:11:00.704958  164662 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1009 19:11:00.705301  164662 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1009 19:11:00.706752  164662 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nThe connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-158523 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-158523 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-158523
helpers_test.go:243: (dbg) docker inspect functional-158523:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	        "Created": "2025-10-09T18:56:39.519997973Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 156103,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:56:39.555178961Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hosts",
	        "LogPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4-json.log",
	        "Name": "/functional-158523",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-158523:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-158523",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	                "LowerDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-158523",
	                "Source": "/var/lib/docker/volumes/functional-158523/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-158523",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-158523",
	                "name.minikube.sigs.k8s.io": "functional-158523",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "50f61b8c99f766763e9e30d37584e537ddf10f74215a382a4221cbe7f7c2e821",
	            "SandboxKey": "/var/run/docker/netns/50f61b8c99f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-158523": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:a1:34:24:78:d1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6347eb7b6b2842e266f51d3873c8aa169a4287ea52510c3d62dc4ed41012963c",
	                    "EndpointID": "eb1ada23cbc058d52037dd94574627dd1b0fedf455f291e656f050bf06881952",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-158523",
	                        "dea3735c567c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523: exit status 2 (306.365158ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 logs -n 25
helpers_test.go:260: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-484045                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-484045   │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │ 09 Oct 25 18:39 UTC │
	│ start   │ --download-only -p download-docker-070263 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-070263 │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │                     │
	│ delete  │ -p download-docker-070263                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-070263 │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │ 09 Oct 25 18:39 UTC │
	│ start   │ --download-only -p binary-mirror-721152 --alsologtostderr --binary-mirror http://127.0.0.1:36453 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-721152   │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │                     │
	│ delete  │ -p binary-mirror-721152                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-721152   │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │ 09 Oct 25 18:39 UTC │
	│ addons  │ disable dashboard -p addons-139298                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-139298          │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │                     │
	│ addons  │ enable dashboard -p addons-139298                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-139298          │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │                     │
	│ start   │ -p addons-139298 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-139298          │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │                     │
	│ delete  │ -p addons-139298                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-139298          │ jenkins │ v1.37.0 │ 09 Oct 25 18:48 UTC │ 09 Oct 25 18:48 UTC │
	│ start   │ -p nospam-656427 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-656427 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:48 UTC │                     │
	│ start   │ nospam-656427 --log_dir /tmp/nospam-656427 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ start   │ nospam-656427 --log_dir /tmp/nospam-656427 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ start   │ nospam-656427 --log_dir /tmp/nospam-656427 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ pause   │ nospam-656427 --log_dir /tmp/nospam-656427 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ pause   │ nospam-656427 --log_dir /tmp/nospam-656427 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ pause   │ nospam-656427 --log_dir /tmp/nospam-656427 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ unpause │ nospam-656427 --log_dir /tmp/nospam-656427 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ unpause │ nospam-656427 --log_dir /tmp/nospam-656427 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ unpause │ nospam-656427 --log_dir /tmp/nospam-656427 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ nospam-656427 --log_dir /tmp/nospam-656427 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ nospam-656427 --log_dir /tmp/nospam-656427 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ nospam-656427 --log_dir /tmp/nospam-656427 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ delete  │ -p nospam-656427                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-656427          │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ start   │ -p functional-158523 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-158523      │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ start   │ -p functional-158523 --alsologtostderr -v=8                                                                                                                                                                                                                                                                                                                                                                                                                              │ functional-158523      │ jenkins │ v1.37.0 │ 09 Oct 25 19:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:04:53
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:04:53.859600  161014 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:04:53.859894  161014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:53.859904  161014 out.go:374] Setting ErrFile to fd 2...
	I1009 19:04:53.859909  161014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:53.860103  161014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:04:53.860622  161014 out.go:368] Setting JSON to false
	I1009 19:04:53.861569  161014 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2843,"bootTime":1760033851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:04:53.861680  161014 start.go:143] virtualization: kvm guest
	I1009 19:04:53.864538  161014 out.go:179] * [functional-158523] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:04:53.866020  161014 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:04:53.866041  161014 notify.go:221] Checking for updates...
	I1009 19:04:53.868520  161014 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:04:53.869799  161014 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:53.871001  161014 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:04:53.872350  161014 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:04:53.873695  161014 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:04:53.875515  161014 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:53.875628  161014 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:04:53.899122  161014 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:04:53.899239  161014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:04:53.961702  161014 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:04:53.950772825 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:04:53.961810  161014 docker.go:319] overlay module found
	I1009 19:04:53.963901  161014 out.go:179] * Using the docker driver based on existing profile
	I1009 19:04:53.965359  161014 start.go:309] selected driver: docker
	I1009 19:04:53.965397  161014 start.go:930] validating driver "docker" against &{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:04:53.965505  161014 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:04:53.965601  161014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:04:54.024534  161014 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:04:54.014787007 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:04:54.025138  161014 cni.go:84] Creating CNI manager for ""
	I1009 19:04:54.025189  161014 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:04:54.025246  161014 start.go:353] cluster config:
	{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:04:54.027519  161014 out.go:179] * Starting "functional-158523" primary control-plane node in "functional-158523" cluster
	I1009 19:04:54.028967  161014 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:04:54.030473  161014 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:04:54.031821  161014 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:04:54.031876  161014 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:04:54.031885  161014 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:04:54.031986  161014 cache.go:58] Caching tarball of preloaded images
	I1009 19:04:54.032085  161014 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:04:54.032098  161014 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:04:54.032213  161014 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/config.json ...
	I1009 19:04:54.053026  161014 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:04:54.053045  161014 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:04:54.053063  161014 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:04:54.053096  161014 start.go:361] acquireMachinesLock for functional-158523: {Name:mk995713bbd40419f859c4a8640c8ada0479020c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:04:54.053186  161014 start.go:365] duration metric: took 46.429µs to acquireMachinesLock for "functional-158523"
	I1009 19:04:54.053209  161014 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:04:54.053220  161014 fix.go:55] fixHost starting: 
	I1009 19:04:54.053511  161014 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:04:54.070674  161014 fix.go:113] recreateIfNeeded on functional-158523: state=Running err=<nil>
	W1009 19:04:54.070714  161014 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:04:54.072611  161014 out.go:252] * Updating the running docker "functional-158523" container ...
	I1009 19:04:54.072644  161014 machine.go:93] provisionDockerMachine start ...
	I1009 19:04:54.072732  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:54.089158  161014 main.go:141] libmachine: Using SSH client type: native
	I1009 19:04:54.089398  161014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:04:54.089417  161014 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:04:54.234516  161014 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-158523
	
	I1009 19:04:54.234543  161014 ubuntu.go:182] provisioning hostname "functional-158523"
	I1009 19:04:54.234606  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:54.252690  161014 main.go:141] libmachine: Using SSH client type: native
	I1009 19:04:54.252942  161014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:04:54.252960  161014 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-158523 && echo "functional-158523" | sudo tee /etc/hostname
	I1009 19:04:54.409130  161014 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-158523
	
	I1009 19:04:54.409240  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:54.428592  161014 main.go:141] libmachine: Using SSH client type: native
	I1009 19:04:54.428819  161014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:04:54.428839  161014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-158523' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-158523/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-158523' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:04:54.575221  161014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:04:54.575248  161014 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:04:54.575298  161014 ubuntu.go:190] setting up certificates
	I1009 19:04:54.575313  161014 provision.go:84] configureAuth start
	I1009 19:04:54.575366  161014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-158523
	I1009 19:04:54.593157  161014 provision.go:143] copyHostCerts
	I1009 19:04:54.593200  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:04:54.593229  161014 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:04:54.593244  161014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:04:54.593315  161014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:04:54.593491  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:04:54.593517  161014 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:04:54.593524  161014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:04:54.593557  161014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:04:54.593615  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:04:54.593632  161014 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:04:54.593638  161014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:04:54.593693  161014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:04:54.593752  161014 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.functional-158523 san=[127.0.0.1 192.168.49.2 functional-158523 localhost minikube]
	I1009 19:04:54.998231  161014 provision.go:177] copyRemoteCerts
	I1009 19:04:54.998297  161014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:04:54.998335  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.016505  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.120020  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:04:55.120077  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 19:04:55.138116  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:04:55.138187  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:04:55.157031  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:04:55.157100  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:04:55.176045  161014 provision.go:87] duration metric: took 600.715143ms to configureAuth
	I1009 19:04:55.176080  161014 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:04:55.176245  161014 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:55.176357  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.194450  161014 main.go:141] libmachine: Using SSH client type: native
	I1009 19:04:55.194679  161014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:04:55.194701  161014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:04:55.467764  161014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:04:55.467789  161014 machine.go:96] duration metric: took 1.395134259s to provisionDockerMachine
	I1009 19:04:55.467804  161014 start.go:294] postStartSetup for "functional-158523" (driver="docker")
	I1009 19:04:55.467821  161014 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:04:55.467882  161014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:04:55.467922  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.486353  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.591117  161014 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:04:55.594855  161014 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1009 19:04:55.594886  161014 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1009 19:04:55.594893  161014 command_runner.go:130] > VERSION_ID="12"
	I1009 19:04:55.594900  161014 command_runner.go:130] > VERSION="12 (bookworm)"
	I1009 19:04:55.594907  161014 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1009 19:04:55.594911  161014 command_runner.go:130] > ID=debian
	I1009 19:04:55.594915  161014 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1009 19:04:55.594920  161014 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1009 19:04:55.594926  161014 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1009 19:04:55.594992  161014 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:04:55.595011  161014 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:04:55.595023  161014 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:04:55.595090  161014 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:04:55.595204  161014 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:04:55.595227  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:04:55.595320  161014 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts -> hosts in /etc/test/nested/copy/141519
	I1009 19:04:55.595330  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts -> /etc/test/nested/copy/141519/hosts
	I1009 19:04:55.595388  161014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/141519
	I1009 19:04:55.603244  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:04:55.621701  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts --> /etc/test/nested/copy/141519/hosts (40 bytes)
	I1009 19:04:55.640532  161014 start.go:297] duration metric: took 172.708538ms for postStartSetup
	I1009 19:04:55.640625  161014 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:04:55.640672  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.658424  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.758913  161014 command_runner.go:130] > 38%
	I1009 19:04:55.759004  161014 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:04:55.763762  161014 command_runner.go:130] > 182G
	I1009 19:04:55.763807  161014 fix.go:57] duration metric: took 1.710584464s for fixHost
	I1009 19:04:55.763821  161014 start.go:84] releasing machines lock for "functional-158523", held for 1.710622732s
	I1009 19:04:55.763882  161014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-158523
	I1009 19:04:55.781557  161014 ssh_runner.go:195] Run: cat /version.json
	I1009 19:04:55.781620  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.781568  161014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:04:55.781740  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.800026  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.800289  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.899840  161014 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759745255-21703", "minikube_version": "v1.37.0", "commit": "a51fe4b7ffc88febd8814e8831f38772e976d097"}
	I1009 19:04:55.953125  161014 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1009 19:04:55.955421  161014 ssh_runner.go:195] Run: systemctl --version
	I1009 19:04:55.962169  161014 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1009 19:04:55.962207  161014 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1009 19:04:55.962422  161014 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:04:56.001789  161014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 19:04:56.006364  161014 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1009 19:04:56.006710  161014 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:04:56.006818  161014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:04:56.015207  161014 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:04:56.015234  161014 start.go:496] detecting cgroup driver to use...
	I1009 19:04:56.015270  161014 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:04:56.015326  161014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:04:56.030444  161014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:04:56.043355  161014 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:04:56.043439  161014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:04:56.058903  161014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:04:56.072794  161014 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:04:56.155598  161014 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:04:56.243484  161014 docker.go:234] disabling docker service ...
	I1009 19:04:56.243560  161014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:04:56.258472  161014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:04:56.271168  161014 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:04:56.357916  161014 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:04:56.444044  161014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:04:56.457436  161014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:04:56.471973  161014 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1009 19:04:56.472020  161014 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:04:56.472074  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.481231  161014 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:04:56.481304  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.490735  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.499743  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.508857  161014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:04:56.517176  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.525878  161014 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.534146  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.542852  161014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:04:56.549944  161014 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1009 19:04:56.550015  161014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:04:56.557444  161014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:04:56.640120  161014 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:04:56.755858  161014 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:04:56.755937  161014 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:04:56.760115  161014 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1009 19:04:56.760139  161014 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1009 19:04:56.760145  161014 command_runner.go:130] > Device: 0,59	Inode: 3908        Links: 1
	I1009 19:04:56.760152  161014 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 19:04:56.760157  161014 command_runner.go:130] > Access: 2025-10-09 19:04:56.737667059 +0000
	I1009 19:04:56.760162  161014 command_runner.go:130] > Modify: 2025-10-09 19:04:56.737667059 +0000
	I1009 19:04:56.760167  161014 command_runner.go:130] > Change: 2025-10-09 19:04:56.737667059 +0000
	I1009 19:04:56.760171  161014 command_runner.go:130] >  Birth: 2025-10-09 19:04:56.737667059 +0000
	I1009 19:04:56.760191  161014 start.go:564] Will wait 60s for crictl version
	I1009 19:04:56.760238  161014 ssh_runner.go:195] Run: which crictl
	I1009 19:04:56.764068  161014 command_runner.go:130] > /usr/local/bin/crictl
	I1009 19:04:56.764145  161014 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:04:56.790045  161014 command_runner.go:130] > Version:  0.1.0
	I1009 19:04:56.790068  161014 command_runner.go:130] > RuntimeName:  cri-o
	I1009 19:04:56.790072  161014 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1009 19:04:56.790077  161014 command_runner.go:130] > RuntimeApiVersion:  v1
	I1009 19:04:56.790095  161014 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:04:56.790164  161014 ssh_runner.go:195] Run: crio --version
	I1009 19:04:56.817435  161014 command_runner.go:130] > crio version 1.34.1
	I1009 19:04:56.817460  161014 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1009 19:04:56.817466  161014 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1009 19:04:56.817470  161014 command_runner.go:130] >    GitTreeState:   dirty
	I1009 19:04:56.817475  161014 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1009 19:04:56.817480  161014 command_runner.go:130] >    GoVersion:      go1.24.6
	I1009 19:04:56.817483  161014 command_runner.go:130] >    Compiler:       gc
	I1009 19:04:56.817488  161014 command_runner.go:130] >    Platform:       linux/amd64
	I1009 19:04:56.817492  161014 command_runner.go:130] >    Linkmode:       static
	I1009 19:04:56.817496  161014 command_runner.go:130] >    BuildTags:
	I1009 19:04:56.817499  161014 command_runner.go:130] >      static
	I1009 19:04:56.817503  161014 command_runner.go:130] >      netgo
	I1009 19:04:56.817506  161014 command_runner.go:130] >      osusergo
	I1009 19:04:56.817510  161014 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1009 19:04:56.817514  161014 command_runner.go:130] >      seccomp
	I1009 19:04:56.817518  161014 command_runner.go:130] >      apparmor
	I1009 19:04:56.817521  161014 command_runner.go:130] >      selinux
	I1009 19:04:56.817525  161014 command_runner.go:130] >    LDFlags:          unknown
	I1009 19:04:56.817531  161014 command_runner.go:130] >    SeccompEnabled:   true
	I1009 19:04:56.817535  161014 command_runner.go:130] >    AppArmorEnabled:  false
	I1009 19:04:56.819047  161014 ssh_runner.go:195] Run: crio --version
	I1009 19:04:56.846110  161014 command_runner.go:130] > crio version 1.34.1
	I1009 19:04:56.846137  161014 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1009 19:04:56.846145  161014 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1009 19:04:56.846154  161014 command_runner.go:130] >    GitTreeState:   dirty
	I1009 19:04:56.846160  161014 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1009 19:04:56.846166  161014 command_runner.go:130] >    GoVersion:      go1.24.6
	I1009 19:04:56.846172  161014 command_runner.go:130] >    Compiler:       gc
	I1009 19:04:56.846179  161014 command_runner.go:130] >    Platform:       linux/amd64
	I1009 19:04:56.846185  161014 command_runner.go:130] >    Linkmode:       static
	I1009 19:04:56.846193  161014 command_runner.go:130] >    BuildTags:
	I1009 19:04:56.846202  161014 command_runner.go:130] >      static
	I1009 19:04:56.846209  161014 command_runner.go:130] >      netgo
	I1009 19:04:56.846218  161014 command_runner.go:130] >      osusergo
	I1009 19:04:56.846226  161014 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1009 19:04:56.846238  161014 command_runner.go:130] >      seccomp
	I1009 19:04:56.846246  161014 command_runner.go:130] >      apparmor
	I1009 19:04:56.846252  161014 command_runner.go:130] >      selinux
	I1009 19:04:56.846262  161014 command_runner.go:130] >    LDFlags:          unknown
	I1009 19:04:56.846270  161014 command_runner.go:130] >    SeccompEnabled:   true
	I1009 19:04:56.846280  161014 command_runner.go:130] >    AppArmorEnabled:  false
	I1009 19:04:56.849910  161014 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:04:56.851471  161014 cli_runner.go:164] Run: docker network inspect functional-158523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:04:56.867982  161014 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:04:56.872517  161014 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1009 19:04:56.872627  161014 kubeadm.go:883] updating cluster {Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:04:56.872731  161014 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:04:56.872790  161014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:04:56.904568  161014 command_runner.go:130] > {
	I1009 19:04:56.904591  161014 command_runner.go:130] >   "images":  [
	I1009 19:04:56.904595  161014 command_runner.go:130] >     {
	I1009 19:04:56.904603  161014 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1009 19:04:56.904608  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.904617  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1009 19:04:56.904622  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904628  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.904652  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1009 19:04:56.904667  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1009 19:04:56.904673  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904681  161014 command_runner.go:130] >       "size":  "109379124",
	I1009 19:04:56.904688  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.904694  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.904700  161014 command_runner.go:130] >     },
	I1009 19:04:56.904706  161014 command_runner.go:130] >     {
	I1009 19:04:56.904719  161014 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 19:04:56.904728  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.904736  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 19:04:56.904744  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904754  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.904771  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 19:04:56.904786  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 19:04:56.904794  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904799  161014 command_runner.go:130] >       "size":  "31470524",
	I1009 19:04:56.904805  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.904814  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.904822  161014 command_runner.go:130] >     },
	I1009 19:04:56.904831  161014 command_runner.go:130] >     {
	I1009 19:04:56.904841  161014 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1009 19:04:56.904851  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.904861  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1009 19:04:56.904870  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904879  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.904890  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1009 19:04:56.904903  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1009 19:04:56.904912  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904919  161014 command_runner.go:130] >       "size":  "76103547",
	I1009 19:04:56.904928  161014 command_runner.go:130] >       "username":  "nonroot",
	I1009 19:04:56.904938  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.904946  161014 command_runner.go:130] >     },
	I1009 19:04:56.904951  161014 command_runner.go:130] >     {
	I1009 19:04:56.904963  161014 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1009 19:04:56.904972  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.904982  161014 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1009 19:04:56.904988  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904994  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905015  161014 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1009 19:04:56.905029  161014 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1009 19:04:56.905038  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905048  161014 command_runner.go:130] >       "size":  "195976448",
	I1009 19:04:56.905056  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905062  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.905071  161014 command_runner.go:130] >       },
	I1009 19:04:56.905082  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905092  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905096  161014 command_runner.go:130] >     },
	I1009 19:04:56.905099  161014 command_runner.go:130] >     {
	I1009 19:04:56.905111  161014 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1009 19:04:56.905120  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905128  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1009 19:04:56.905137  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905147  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905160  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1009 19:04:56.905174  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1009 19:04:56.905182  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905188  161014 command_runner.go:130] >       "size":  "89046001",
	I1009 19:04:56.905195  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905199  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.905207  161014 command_runner.go:130] >       },
	I1009 19:04:56.905218  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905228  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905235  161014 command_runner.go:130] >     },
	I1009 19:04:56.905240  161014 command_runner.go:130] >     {
	I1009 19:04:56.905253  161014 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1009 19:04:56.905262  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905273  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1009 19:04:56.905280  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905284  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905299  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1009 19:04:56.905315  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1009 19:04:56.905324  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905333  161014 command_runner.go:130] >       "size":  "76004181",
	I1009 19:04:56.905342  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905352  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.905360  161014 command_runner.go:130] >       },
	I1009 19:04:56.905367  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905393  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905402  161014 command_runner.go:130] >     },
	I1009 19:04:56.905407  161014 command_runner.go:130] >     {
	I1009 19:04:56.905417  161014 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1009 19:04:56.905427  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905438  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1009 19:04:56.905446  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905456  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905470  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1009 19:04:56.905482  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1009 19:04:56.905490  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905500  161014 command_runner.go:130] >       "size":  "73138073",
	I1009 19:04:56.905510  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905516  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905525  161014 command_runner.go:130] >     },
	I1009 19:04:56.905533  161014 command_runner.go:130] >     {
	I1009 19:04:56.905543  161014 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1009 19:04:56.905552  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905563  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1009 19:04:56.905571  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905579  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905590  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1009 19:04:56.905613  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1009 19:04:56.905622  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905629  161014 command_runner.go:130] >       "size":  "53844823",
	I1009 19:04:56.905637  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905647  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.905655  161014 command_runner.go:130] >       },
	I1009 19:04:56.905664  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905673  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905681  161014 command_runner.go:130] >     },
	I1009 19:04:56.905690  161014 command_runner.go:130] >     {
	I1009 19:04:56.905696  161014 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1009 19:04:56.905705  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905712  161014 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1009 19:04:56.905721  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905727  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905740  161014 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1009 19:04:56.905754  161014 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1009 19:04:56.905762  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905772  161014 command_runner.go:130] >       "size":  "742092",
	I1009 19:04:56.905783  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905791  161014 command_runner.go:130] >         "value":  "65535"
	I1009 19:04:56.905795  161014 command_runner.go:130] >       },
	I1009 19:04:56.905802  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905808  161014 command_runner.go:130] >       "pinned":  true
	I1009 19:04:56.905816  161014 command_runner.go:130] >     }
	I1009 19:04:56.905822  161014 command_runner.go:130] >   ]
	I1009 19:04:56.905830  161014 command_runner.go:130] > }
	I1009 19:04:56.906014  161014 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:04:56.906027  161014 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:04:56.906079  161014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:04:56.933720  161014 command_runner.go:130] > {
	I1009 19:04:56.933747  161014 command_runner.go:130] >   "images":  [
	I1009 19:04:56.933753  161014 command_runner.go:130] >     {
	I1009 19:04:56.933769  161014 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1009 19:04:56.933774  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.933781  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1009 19:04:56.933788  161014 command_runner.go:130] >       ],
	I1009 19:04:56.933794  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.933805  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1009 19:04:56.933821  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1009 19:04:56.933827  161014 command_runner.go:130] >       ],
	I1009 19:04:56.933835  161014 command_runner.go:130] >       "size":  "109379124",
	I1009 19:04:56.933845  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.933855  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.933861  161014 command_runner.go:130] >     },
	I1009 19:04:56.933864  161014 command_runner.go:130] >     {
	I1009 19:04:56.933873  161014 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 19:04:56.933879  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.933890  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 19:04:56.933899  161014 command_runner.go:130] >       ],
	I1009 19:04:56.933906  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.933921  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 19:04:56.933935  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 19:04:56.933944  161014 command_runner.go:130] >       ],
	I1009 19:04:56.933951  161014 command_runner.go:130] >       "size":  "31470524",
	I1009 19:04:56.933960  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.933970  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.933975  161014 command_runner.go:130] >     },
	I1009 19:04:56.933979  161014 command_runner.go:130] >     {
	I1009 19:04:56.933992  161014 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1009 19:04:56.934002  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934016  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1009 19:04:56.934029  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934036  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934050  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1009 19:04:56.934065  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1009 19:04:56.934072  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934079  161014 command_runner.go:130] >       "size":  "76103547",
	I1009 19:04:56.934086  161014 command_runner.go:130] >       "username":  "nonroot",
	I1009 19:04:56.934090  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934097  161014 command_runner.go:130] >     },
	I1009 19:04:56.934102  161014 command_runner.go:130] >     {
	I1009 19:04:56.934116  161014 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1009 19:04:56.934126  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934137  161014 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1009 19:04:56.934145  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934151  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934164  161014 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1009 19:04:56.934177  161014 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1009 19:04:56.934183  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934188  161014 command_runner.go:130] >       "size":  "195976448",
	I1009 19:04:56.934197  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934207  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.934216  161014 command_runner.go:130] >       },
	I1009 19:04:56.934263  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934275  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934279  161014 command_runner.go:130] >     },
	I1009 19:04:56.934283  161014 command_runner.go:130] >     {
	I1009 19:04:56.934296  161014 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1009 19:04:56.934306  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934315  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1009 19:04:56.934323  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934329  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934344  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1009 19:04:56.934358  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1009 19:04:56.934372  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934397  161014 command_runner.go:130] >       "size":  "89046001",
	I1009 19:04:56.934408  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934416  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.934425  161014 command_runner.go:130] >       },
	I1009 19:04:56.934435  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934444  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934452  161014 command_runner.go:130] >     },
	I1009 19:04:56.934461  161014 command_runner.go:130] >     {
	I1009 19:04:56.934473  161014 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1009 19:04:56.934480  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934486  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1009 19:04:56.934493  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934499  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934514  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1009 19:04:56.934529  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1009 19:04:56.934538  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934545  161014 command_runner.go:130] >       "size":  "76004181",
	I1009 19:04:56.934554  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934560  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.934566  161014 command_runner.go:130] >       },
	I1009 19:04:56.934572  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934578  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934581  161014 command_runner.go:130] >     },
	I1009 19:04:56.934584  161014 command_runner.go:130] >     {
	I1009 19:04:56.934592  161014 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1009 19:04:56.934597  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934605  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1009 19:04:56.934610  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934616  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934629  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1009 19:04:56.934643  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1009 19:04:56.934652  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934660  161014 command_runner.go:130] >       "size":  "73138073",
	I1009 19:04:56.934667  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934677  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934681  161014 command_runner.go:130] >     },
	I1009 19:04:56.934684  161014 command_runner.go:130] >     {
	I1009 19:04:56.934690  161014 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1009 19:04:56.934696  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934704  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1009 19:04:56.934709  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934716  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934726  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1009 19:04:56.934747  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1009 19:04:56.934753  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934772  161014 command_runner.go:130] >       "size":  "53844823",
	I1009 19:04:56.934779  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934786  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.934795  161014 command_runner.go:130] >       },
	I1009 19:04:56.934801  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934811  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934816  161014 command_runner.go:130] >     },
	I1009 19:04:56.934824  161014 command_runner.go:130] >     {
	I1009 19:04:56.934834  161014 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1009 19:04:56.934843  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934850  161014 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1009 19:04:56.934858  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934862  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934871  161014 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1009 19:04:56.934886  161014 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1009 19:04:56.934895  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934902  161014 command_runner.go:130] >       "size":  "742092",
	I1009 19:04:56.934910  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934917  161014 command_runner.go:130] >         "value":  "65535"
	I1009 19:04:56.934926  161014 command_runner.go:130] >       },
	I1009 19:04:56.934934  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934943  161014 command_runner.go:130] >       "pinned":  true
	I1009 19:04:56.934947  161014 command_runner.go:130] >     }
	I1009 19:04:56.934950  161014 command_runner.go:130] >   ]
	I1009 19:04:56.934953  161014 command_runner.go:130] > }
	I1009 19:04:56.935095  161014 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:04:56.935110  161014 cache_images.go:85] Images are preloaded, skipping loading
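	[editor's aside] The preload check above runs "sudo crictl images --output json" and decides from the decoded payload that all required images are already present. As a minimal, illustrative Go sketch only (not minikube's actual crio.go/cache_images.go code), decoding that payload and verifying a few of the expected repo tags could look like this; the expected tags below are taken from the listing above:

	// Hedged sketch: decode `crictl images --output json` and report which
	// expected repo tags are present. Illustrative only, not minikube code.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		// Tags copied from the crictl output logged above.
		want := map[string]bool{
			"registry.k8s.io/kube-apiserver:v1.34.1": false,
			"registry.k8s.io/etcd:3.6.4-0":           false,
			"registry.k8s.io/pause:3.10.1":           false,
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if _, ok := want[tag]; ok {
					want[tag] = true
				}
			}
		}
		for tag, found := range want {
			fmt.Printf("%s preloaded=%v\n", tag, found)
		}
	}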
	I1009 19:04:56.935118  161014 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1009 19:04:56.935242  161014 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-158523 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
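	[editor's aside] The kubelet systemd drop-in logged above is rendered from the node's parameters (Kubernetes version, node name, node IP). The following minimal Go sketch shows how such a unit could be generated with text/template; the template text and field names are assumptions made for illustration, not minikube's kubeadm.go template, and some kubelet flags from the logged unit are omitted for brevity:

	// Hedged sketch: render a kubelet drop-in similar to the unit logged above.
	package main

	import (
		"os"
		"text/template"
	)

	const unitTmpl = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		// Values taken from the cluster config logged above.
		params := struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.34.1", "functional-158523", "192.168.49.2"}

		t := template.Must(template.New("kubelet").Parse(unitTmpl))
		if err := t.Execute(os.Stdout, params); err != nil {
			panic(err)
		}
	}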
	I1009 19:04:56.935323  161014 ssh_runner.go:195] Run: crio config
	I1009 19:04:56.978304  161014 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1009 19:04:56.978336  161014 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1009 19:04:56.978345  161014 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1009 19:04:56.978350  161014 command_runner.go:130] > #
	I1009 19:04:56.978359  161014 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1009 19:04:56.978367  161014 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1009 19:04:56.978390  161014 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1009 19:04:56.978401  161014 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1009 19:04:56.978406  161014 command_runner.go:130] > # reload'.
	I1009 19:04:56.978415  161014 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1009 19:04:56.978436  161014 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1009 19:04:56.978448  161014 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1009 19:04:56.978458  161014 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1009 19:04:56.978464  161014 command_runner.go:130] > [crio]
	I1009 19:04:56.978476  161014 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1009 19:04:56.978484  161014 command_runner.go:130] > # containers images, in this directory.
	I1009 19:04:56.978495  161014 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1009 19:04:56.978505  161014 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1009 19:04:56.978514  161014 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1009 19:04:56.978523  161014 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1009 19:04:56.978532  161014 command_runner.go:130] > # imagestore = ""
	I1009 19:04:56.978541  161014 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1009 19:04:56.978554  161014 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1009 19:04:56.978561  161014 command_runner.go:130] > # storage_driver = "overlay"
	I1009 19:04:56.978571  161014 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1009 19:04:56.978581  161014 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1009 19:04:56.978591  161014 command_runner.go:130] > # storage_option = [
	I1009 19:04:56.978596  161014 command_runner.go:130] > # ]
	I1009 19:04:56.978605  161014 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1009 19:04:56.978616  161014 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1009 19:04:56.978623  161014 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1009 19:04:56.978631  161014 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1009 19:04:56.978640  161014 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1009 19:04:56.978647  161014 command_runner.go:130] > # always happen on a node reboot
	I1009 19:04:56.978654  161014 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1009 19:04:56.978669  161014 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1009 19:04:56.978682  161014 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1009 19:04:56.978689  161014 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1009 19:04:56.978695  161014 command_runner.go:130] > # version_file_persist = ""
	I1009 19:04:56.978714  161014 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1009 19:04:56.978728  161014 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1009 19:04:56.978737  161014 command_runner.go:130] > # internal_wipe = true
	I1009 19:04:56.978748  161014 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1009 19:04:56.978760  161014 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1009 19:04:56.978772  161014 command_runner.go:130] > # internal_repair = true
	I1009 19:04:56.978780  161014 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1009 19:04:56.978794  161014 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1009 19:04:56.978805  161014 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1009 19:04:56.978815  161014 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1009 19:04:56.978825  161014 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1009 19:04:56.978833  161014 command_runner.go:130] > [crio.api]
	I1009 19:04:56.978841  161014 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1009 19:04:56.978851  161014 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1009 19:04:56.978860  161014 command_runner.go:130] > # IP address on which the stream server will listen.
	I1009 19:04:56.978870  161014 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1009 19:04:56.978881  161014 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1009 19:04:56.978892  161014 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1009 19:04:56.978901  161014 command_runner.go:130] > # stream_port = "0"
	I1009 19:04:56.978910  161014 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1009 19:04:56.978920  161014 command_runner.go:130] > # stream_enable_tls = false
	I1009 19:04:56.978929  161014 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1009 19:04:56.978954  161014 command_runner.go:130] > # stream_idle_timeout = ""
	I1009 19:04:56.978969  161014 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1009 19:04:56.978978  161014 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1009 19:04:56.978985  161014 command_runner.go:130] > # stream_tls_cert = ""
	I1009 19:04:56.978999  161014 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1009 19:04:56.979007  161014 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1009 19:04:56.979013  161014 command_runner.go:130] > # stream_tls_key = ""
	I1009 19:04:56.979025  161014 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1009 19:04:56.979039  161014 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1009 19:04:56.979049  161014 command_runner.go:130] > # automatically pick up the changes.
	I1009 19:04:56.979058  161014 command_runner.go:130] > # stream_tls_ca = ""
	I1009 19:04:56.979084  161014 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 19:04:56.979098  161014 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1009 19:04:56.979110  161014 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 19:04:56.979117  161014 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1009 19:04:56.979127  161014 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1009 19:04:56.979134  161014 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1009 19:04:56.979139  161014 command_runner.go:130] > [crio.runtime]
	I1009 19:04:56.979146  161014 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1009 19:04:56.979155  161014 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1009 19:04:56.979163  161014 command_runner.go:130] > # "nofile=1024:2048"
	I1009 19:04:56.979177  161014 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1009 19:04:56.979187  161014 command_runner.go:130] > # default_ulimits = [
	I1009 19:04:56.979193  161014 command_runner.go:130] > # ]
	I1009 19:04:56.979206  161014 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1009 19:04:56.979215  161014 command_runner.go:130] > # no_pivot = false
	I1009 19:04:56.979226  161014 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1009 19:04:56.979239  161014 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1009 19:04:56.979251  161014 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1009 19:04:56.979259  161014 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1009 19:04:56.979267  161014 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1009 19:04:56.979277  161014 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 19:04:56.979283  161014 command_runner.go:130] > # conmon = ""
	I1009 19:04:56.979290  161014 command_runner.go:130] > # Cgroup setting for conmon
	I1009 19:04:56.979301  161014 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1009 19:04:56.979311  161014 command_runner.go:130] > conmon_cgroup = "pod"
	I1009 19:04:56.979320  161014 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1009 19:04:56.979327  161014 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1009 19:04:56.979338  161014 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 19:04:56.979347  161014 command_runner.go:130] > # conmon_env = [
	I1009 19:04:56.979353  161014 command_runner.go:130] > # ]
	I1009 19:04:56.979364  161014 command_runner.go:130] > # Additional environment variables to set for all the
	I1009 19:04:56.979392  161014 command_runner.go:130] > # containers. These are overridden if set in the
	I1009 19:04:56.979406  161014 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1009 19:04:56.979412  161014 command_runner.go:130] > # default_env = [
	I1009 19:04:56.979420  161014 command_runner.go:130] > # ]
	I1009 19:04:56.979429  161014 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1009 19:04:56.979443  161014 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1009 19:04:56.979453  161014 command_runner.go:130] > # selinux = false
	I1009 19:04:56.979463  161014 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1009 19:04:56.979479  161014 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1009 19:04:56.979489  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.979497  161014 command_runner.go:130] > # seccomp_profile = ""
	I1009 19:04:56.979509  161014 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1009 19:04:56.979522  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.979529  161014 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1009 19:04:56.979542  161014 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1009 19:04:56.979555  161014 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1009 19:04:56.979564  161014 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1009 19:04:56.979574  161014 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1009 19:04:56.979585  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.979593  161014 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1009 19:04:56.979605  161014 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1009 19:04:56.979615  161014 command_runner.go:130] > # the cgroup blockio controller.
	I1009 19:04:56.979622  161014 command_runner.go:130] > # blockio_config_file = ""
	I1009 19:04:56.979636  161014 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1009 19:04:56.979642  161014 command_runner.go:130] > # blockio parameters.
	I1009 19:04:56.979648  161014 command_runner.go:130] > # blockio_reload = false
	I1009 19:04:56.979658  161014 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1009 19:04:56.979664  161014 command_runner.go:130] > # irqbalance daemon.
	I1009 19:04:56.979672  161014 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1009 19:04:56.979681  161014 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1009 19:04:56.979690  161014 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1009 19:04:56.979700  161014 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1009 19:04:56.979710  161014 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1009 19:04:56.979724  161014 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1009 19:04:56.979731  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.979741  161014 command_runner.go:130] > # rdt_config_file = ""
	I1009 19:04:56.979753  161014 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1009 19:04:56.979764  161014 command_runner.go:130] > # cgroup_manager = "systemd"
	I1009 19:04:56.979773  161014 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1009 19:04:56.979783  161014 command_runner.go:130] > # separate_pull_cgroup = ""
	I1009 19:04:56.979791  161014 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1009 19:04:56.979800  161014 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1009 19:04:56.979809  161014 command_runner.go:130] > # will be added.
	I1009 19:04:56.979817  161014 command_runner.go:130] > # default_capabilities = [
	I1009 19:04:56.979826  161014 command_runner.go:130] > # 	"CHOWN",
	I1009 19:04:56.979832  161014 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1009 19:04:56.979840  161014 command_runner.go:130] > # 	"FSETID",
	I1009 19:04:56.979846  161014 command_runner.go:130] > # 	"FOWNER",
	I1009 19:04:56.979855  161014 command_runner.go:130] > # 	"SETGID",
	I1009 19:04:56.979876  161014 command_runner.go:130] > # 	"SETUID",
	I1009 19:04:56.979885  161014 command_runner.go:130] > # 	"SETPCAP",
	I1009 19:04:56.979891  161014 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1009 19:04:56.979901  161014 command_runner.go:130] > # 	"KILL",
	I1009 19:04:56.979906  161014 command_runner.go:130] > # ]
	I1009 19:04:56.979920  161014 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1009 19:04:56.979930  161014 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1009 19:04:56.979950  161014 command_runner.go:130] > # add_inheritable_capabilities = false
	I1009 19:04:56.979963  161014 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1009 19:04:56.979972  161014 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 19:04:56.979977  161014 command_runner.go:130] > default_sysctls = [
	I1009 19:04:56.979993  161014 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1009 19:04:56.979997  161014 command_runner.go:130] > ]
	I1009 19:04:56.980003  161014 command_runner.go:130] > # List of devices on the host that a
	I1009 19:04:56.980010  161014 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1009 19:04:56.980015  161014 command_runner.go:130] > # allowed_devices = [
	I1009 19:04:56.980019  161014 command_runner.go:130] > # 	"/dev/fuse",
	I1009 19:04:56.980024  161014 command_runner.go:130] > # 	"/dev/net/tun",
	I1009 19:04:56.980029  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980035  161014 command_runner.go:130] > # List of additional devices. specified as
	I1009 19:04:56.980047  161014 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1009 19:04:56.980055  161014 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1009 19:04:56.980063  161014 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 19:04:56.980069  161014 command_runner.go:130] > # additional_devices = [
	I1009 19:04:56.980072  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980079  161014 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1009 19:04:56.980084  161014 command_runner.go:130] > # cdi_spec_dirs = [
	I1009 19:04:56.980091  161014 command_runner.go:130] > # 	"/etc/cdi",
	I1009 19:04:56.980097  161014 command_runner.go:130] > # 	"/var/run/cdi",
	I1009 19:04:56.980101  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980111  161014 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1009 19:04:56.980120  161014 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1009 19:04:56.980126  161014 command_runner.go:130] > # Defaults to false.
	I1009 19:04:56.980133  161014 command_runner.go:130] > # device_ownership_from_security_context = false
	I1009 19:04:56.980146  161014 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1009 19:04:56.980157  161014 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1009 19:04:56.980163  161014 command_runner.go:130] > # hooks_dir = [
	I1009 19:04:56.980167  161014 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1009 19:04:56.980173  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980179  161014 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1009 19:04:56.980187  161014 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1009 19:04:56.980192  161014 command_runner.go:130] > # its default mounts from the following two files:
	I1009 19:04:56.980197  161014 command_runner.go:130] > #
	I1009 19:04:56.980202  161014 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1009 19:04:56.980211  161014 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1009 19:04:56.980218  161014 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1009 19:04:56.980221  161014 command_runner.go:130] > #
	I1009 19:04:56.980230  161014 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1009 19:04:56.980236  161014 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1009 19:04:56.980244  161014 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1009 19:04:56.980252  161014 command_runner.go:130] > #      only add mounts it finds in this file.
	I1009 19:04:56.980255  161014 command_runner.go:130] > #
	I1009 19:04:56.980261  161014 command_runner.go:130] > # default_mounts_file = ""
	I1009 19:04:56.980266  161014 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1009 19:04:56.980275  161014 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1009 19:04:56.980281  161014 command_runner.go:130] > # pids_limit = -1
	I1009 19:04:56.980286  161014 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1009 19:04:56.980294  161014 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1009 19:04:56.980300  161014 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1009 19:04:56.980309  161014 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1009 19:04:56.980315  161014 command_runner.go:130] > # log_size_max = -1
	I1009 19:04:56.980322  161014 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1009 19:04:56.980328  161014 command_runner.go:130] > # log_to_journald = false
	I1009 19:04:56.980335  161014 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1009 19:04:56.980341  161014 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1009 19:04:56.980345  161014 command_runner.go:130] > # Path to directory for container attach sockets.
	I1009 19:04:56.980352  161014 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1009 19:04:56.980357  161014 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1009 19:04:56.980365  161014 command_runner.go:130] > # bind_mount_prefix = ""
	I1009 19:04:56.980370  161014 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1009 19:04:56.980376  161014 command_runner.go:130] > # read_only = false
	I1009 19:04:56.980395  161014 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1009 19:04:56.980405  161014 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1009 19:04:56.980413  161014 command_runner.go:130] > # live configuration reload.
	I1009 19:04:56.980417  161014 command_runner.go:130] > # log_level = "info"
	I1009 19:04:56.980425  161014 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1009 19:04:56.980430  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.980435  161014 command_runner.go:130] > # log_filter = ""
	I1009 19:04:56.980441  161014 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1009 19:04:56.980449  161014 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1009 19:04:56.980455  161014 command_runner.go:130] > # separated by comma.
	I1009 19:04:56.980462  161014 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:04:56.980467  161014 command_runner.go:130] > # uid_mappings = ""
	I1009 19:04:56.980473  161014 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1009 19:04:56.980480  161014 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1009 19:04:56.980486  161014 command_runner.go:130] > # separated by comma.
	I1009 19:04:56.980496  161014 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:04:56.980502  161014 command_runner.go:130] > # gid_mappings = ""
	I1009 19:04:56.980508  161014 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1009 19:04:56.980516  161014 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 19:04:56.980524  161014 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 19:04:56.980534  161014 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:04:56.980540  161014 command_runner.go:130] > # minimum_mappable_uid = -1
	I1009 19:04:56.980547  161014 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1009 19:04:56.980556  161014 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 19:04:56.980562  161014 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 19:04:56.980569  161014 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:04:56.980575  161014 command_runner.go:130] > # minimum_mappable_gid = -1
	I1009 19:04:56.980581  161014 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1009 19:04:56.980588  161014 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1009 19:04:56.980593  161014 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1009 19:04:56.980599  161014 command_runner.go:130] > # ctr_stop_timeout = 30
	I1009 19:04:56.980605  161014 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1009 19:04:56.980612  161014 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1009 19:04:56.980616  161014 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1009 19:04:56.980623  161014 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1009 19:04:56.980627  161014 command_runner.go:130] > # drop_infra_ctr = true
	I1009 19:04:56.980635  161014 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1009 19:04:56.980640  161014 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1009 19:04:56.980649  161014 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1009 19:04:56.980657  161014 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1009 19:04:56.980666  161014 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1009 19:04:56.980674  161014 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1009 19:04:56.980682  161014 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1009 19:04:56.980687  161014 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1009 19:04:56.980695  161014 command_runner.go:130] > # shared_cpuset = ""
	I1009 19:04:56.980703  161014 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1009 19:04:56.980707  161014 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1009 19:04:56.980712  161014 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1009 19:04:56.980719  161014 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1009 19:04:56.980725  161014 command_runner.go:130] > # pinns_path = ""
	I1009 19:04:56.980730  161014 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1009 19:04:56.980738  161014 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1009 19:04:56.980742  161014 command_runner.go:130] > # enable_criu_support = true
	I1009 19:04:56.980749  161014 command_runner.go:130] > # Enable/disable the generation of the container,
	I1009 19:04:56.980754  161014 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1009 19:04:56.980761  161014 command_runner.go:130] > # enable_pod_events = false
	I1009 19:04:56.980767  161014 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1009 19:04:56.980775  161014 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1009 19:04:56.980779  161014 command_runner.go:130] > # default_runtime = "crun"
	I1009 19:04:56.980785  161014 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1009 19:04:56.980792  161014 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1009 19:04:56.980803  161014 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1009 19:04:56.980809  161014 command_runner.go:130] > # creation as a file is not desired either.
	I1009 19:04:56.980817  161014 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1009 19:04:56.980823  161014 command_runner.go:130] > # the hostname is being managed dynamically.
	I1009 19:04:56.980828  161014 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1009 19:04:56.980831  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980836  161014 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1009 19:04:56.980844  161014 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1009 19:04:56.980850  161014 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1009 19:04:56.980858  161014 command_runner.go:130] > # Each entry in the table should follow the format:
	I1009 19:04:56.980861  161014 command_runner.go:130] > #
	I1009 19:04:56.980865  161014 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1009 19:04:56.980872  161014 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1009 19:04:56.980875  161014 command_runner.go:130] > # runtime_type = "oci"
	I1009 19:04:56.980882  161014 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1009 19:04:56.980887  161014 command_runner.go:130] > # inherit_default_runtime = false
	I1009 19:04:56.980894  161014 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1009 19:04:56.980898  161014 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1009 19:04:56.980902  161014 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1009 19:04:56.980906  161014 command_runner.go:130] > # monitor_env = []
	I1009 19:04:56.980910  161014 command_runner.go:130] > # privileged_without_host_devices = false
	I1009 19:04:56.980917  161014 command_runner.go:130] > # allowed_annotations = []
	I1009 19:04:56.980922  161014 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1009 19:04:56.980928  161014 command_runner.go:130] > # no_sync_log = false
	I1009 19:04:56.980932  161014 command_runner.go:130] > # default_annotations = {}
	I1009 19:04:56.980939  161014 command_runner.go:130] > # stream_websockets = false
	I1009 19:04:56.980949  161014 command_runner.go:130] > # seccomp_profile = ""
	I1009 19:04:56.980985  161014 command_runner.go:130] > # Where:
	I1009 19:04:56.980994  161014 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1009 19:04:56.980999  161014 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1009 19:04:56.981005  161014 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1009 19:04:56.981010  161014 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1009 19:04:56.981014  161014 command_runner.go:130] > #   in $PATH.
	I1009 19:04:56.981020  161014 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1009 19:04:56.981024  161014 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1009 19:04:56.981032  161014 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1009 19:04:56.981035  161014 command_runner.go:130] > #   state.
	I1009 19:04:56.981041  161014 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1009 19:04:56.981049  161014 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1009 19:04:56.981054  161014 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1009 19:04:56.981063  161014 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1009 19:04:56.981067  161014 command_runner.go:130] > #   the values from the default runtime on load time.
	I1009 19:04:56.981078  161014 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1009 19:04:56.981086  161014 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1009 19:04:56.981092  161014 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1009 19:04:56.981100  161014 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1009 19:04:56.981105  161014 command_runner.go:130] > #   The currently recognized values are:
	I1009 19:04:56.981113  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1009 19:04:56.981123  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1009 19:04:56.981130  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1009 19:04:56.981135  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1009 19:04:56.981144  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1009 19:04:56.981153  161014 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1009 19:04:56.981161  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1009 19:04:56.981169  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1009 19:04:56.981177  161014 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1009 19:04:56.981183  161014 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1009 19:04:56.981191  161014 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1009 19:04:56.981199  161014 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1009 19:04:56.981204  161014 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1009 19:04:56.981213  161014 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1009 19:04:56.981221  161014 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1009 19:04:56.981227  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1009 19:04:56.981235  161014 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1009 19:04:56.981239  161014 command_runner.go:130] > #   deprecated option "conmon".
	I1009 19:04:56.981248  161014 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1009 19:04:56.981255  161014 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1009 19:04:56.981261  161014 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1009 19:04:56.981268  161014 command_runner.go:130] > #   should be moved to the container's cgroup
	I1009 19:04:56.981273  161014 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1009 19:04:56.981280  161014 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1009 19:04:56.981287  161014 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1009 19:04:56.981293  161014 command_runner.go:130] > #   conmon-rs by using:
	I1009 19:04:56.981300  161014 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1009 19:04:56.981309  161014 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1009 19:04:56.981318  161014 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1009 19:04:56.981326  161014 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1009 19:04:56.981334  161014 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1009 19:04:56.981341  161014 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1009 19:04:56.981351  161014 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1009 19:04:56.981359  161014 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1009 19:04:56.981370  161014 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1009 19:04:56.981395  161014 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1009 19:04:56.981405  161014 command_runner.go:130] > #   when a machine crash happens.
	I1009 19:04:56.981411  161014 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1009 19:04:56.981421  161014 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1009 19:04:56.981431  161014 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1009 19:04:56.981437  161014 command_runner.go:130] > #   seccomp profile for the runtime.
	I1009 19:04:56.981443  161014 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1009 19:04:56.981452  161014 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1009 19:04:56.981455  161014 command_runner.go:130] > #
	I1009 19:04:56.981460  161014 command_runner.go:130] > # Using the seccomp notifier feature:
	I1009 19:04:56.981465  161014 command_runner.go:130] > #
	I1009 19:04:56.981472  161014 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1009 19:04:56.981480  161014 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1009 19:04:56.981483  161014 command_runner.go:130] > #
	I1009 19:04:56.981490  161014 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1009 19:04:56.981498  161014 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1009 19:04:56.981501  161014 command_runner.go:130] > #
	I1009 19:04:56.981507  161014 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1009 19:04:56.981512  161014 command_runner.go:130] > # feature.
	I1009 19:04:56.981515  161014 command_runner.go:130] > #
	I1009 19:04:56.981537  161014 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1009 19:04:56.981545  161014 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1009 19:04:56.981553  161014 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1009 19:04:56.981562  161014 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1009 19:04:56.981568  161014 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1009 19:04:56.981573  161014 command_runner.go:130] > #
	I1009 19:04:56.981579  161014 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1009 19:04:56.981587  161014 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1009 19:04:56.981590  161014 command_runner.go:130] > #
	I1009 19:04:56.981598  161014 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1009 19:04:56.981603  161014 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1009 19:04:56.981608  161014 command_runner.go:130] > #
	I1009 19:04:56.981614  161014 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1009 19:04:56.981622  161014 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1009 19:04:56.981628  161014 command_runner.go:130] > # limitation.
	I1009 19:04:56.981632  161014 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1009 19:04:56.981639  161014 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1009 19:04:56.981642  161014 command_runner.go:130] > runtime_type = ""
	I1009 19:04:56.981648  161014 command_runner.go:130] > runtime_root = "/run/crun"
	I1009 19:04:56.981652  161014 command_runner.go:130] > inherit_default_runtime = false
	I1009 19:04:56.981657  161014 command_runner.go:130] > runtime_config_path = ""
	I1009 19:04:56.981663  161014 command_runner.go:130] > container_min_memory = ""
	I1009 19:04:56.981667  161014 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 19:04:56.981673  161014 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 19:04:56.981677  161014 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 19:04:56.981683  161014 command_runner.go:130] > allowed_annotations = [
	I1009 19:04:56.981687  161014 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1009 19:04:56.981694  161014 command_runner.go:130] > ]
	I1009 19:04:56.981699  161014 command_runner.go:130] > privileged_without_host_devices = false
	I1009 19:04:56.981705  161014 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1009 19:04:56.981709  161014 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1009 19:04:56.981715  161014 command_runner.go:130] > runtime_type = ""
	I1009 19:04:56.981719  161014 command_runner.go:130] > runtime_root = "/run/runc"
	I1009 19:04:56.981725  161014 command_runner.go:130] > inherit_default_runtime = false
	I1009 19:04:56.981729  161014 command_runner.go:130] > runtime_config_path = ""
	I1009 19:04:56.981735  161014 command_runner.go:130] > container_min_memory = ""
	I1009 19:04:56.981739  161014 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 19:04:56.981744  161014 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 19:04:56.981750  161014 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 19:04:56.981754  161014 command_runner.go:130] > privileged_without_host_devices = false
	I1009 19:04:56.981761  161014 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1009 19:04:56.981769  161014 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1009 19:04:56.981774  161014 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1009 19:04:56.981783  161014 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1009 19:04:56.981795  161014 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1009 19:04:56.981807  161014 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1009 19:04:56.981815  161014 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1009 19:04:56.981823  161014 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1009 19:04:56.981831  161014 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1009 19:04:56.981840  161014 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1009 19:04:56.981848  161014 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1009 19:04:56.981854  161014 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1009 19:04:56.981859  161014 command_runner.go:130] > # Example:
	I1009 19:04:56.981864  161014 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1009 19:04:56.981871  161014 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1009 19:04:56.981875  161014 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1009 19:04:56.981884  161014 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1009 19:04:56.981899  161014 command_runner.go:130] > # cpuset = "0-1"
	I1009 19:04:56.981905  161014 command_runner.go:130] > # cpushares = "5"
	I1009 19:04:56.981909  161014 command_runner.go:130] > # cpuquota = "1000"
	I1009 19:04:56.981912  161014 command_runner.go:130] > # cpuperiod = "100000"
	I1009 19:04:56.981920  161014 command_runner.go:130] > # cpulimit = "35"
	I1009 19:04:56.981926  161014 command_runner.go:130] > # Where:
	I1009 19:04:56.981936  161014 command_runner.go:130] > # The workload name is workload-type.
	I1009 19:04:56.981948  161014 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1009 19:04:56.981955  161014 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1009 19:04:56.981962  161014 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1009 19:04:56.981971  161014 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1009 19:04:56.981979  161014 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1009 19:04:56.981984  161014 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1009 19:04:56.981993  161014 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1009 19:04:56.981997  161014 command_runner.go:130] > # Default value is set to true
	I1009 19:04:56.982003  161014 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1009 19:04:56.982009  161014 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1009 19:04:56.982013  161014 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1009 19:04:56.982017  161014 command_runner.go:130] > # Default value is set to 'false'
	I1009 19:04:56.982020  161014 command_runner.go:130] > # disable_hostport_mapping = false
	I1009 19:04:56.982025  161014 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1009 19:04:56.982034  161014 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1009 19:04:56.982039  161014 command_runner.go:130] > # timezone = ""
	I1009 19:04:56.982045  161014 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1009 19:04:56.982050  161014 command_runner.go:130] > #
	I1009 19:04:56.982056  161014 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1009 19:04:56.982064  161014 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1009 19:04:56.982067  161014 command_runner.go:130] > [crio.image]
	I1009 19:04:56.982072  161014 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1009 19:04:56.982080  161014 command_runner.go:130] > # default_transport = "docker://"
	I1009 19:04:56.982085  161014 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1009 19:04:56.982093  161014 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1009 19:04:56.982100  161014 command_runner.go:130] > # global_auth_file = ""
	I1009 19:04:56.982105  161014 command_runner.go:130] > # The image used to instantiate infra containers.
	I1009 19:04:56.982112  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.982116  161014 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1009 19:04:56.982124  161014 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1009 19:04:56.982132  161014 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1009 19:04:56.982137  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.982143  161014 command_runner.go:130] > # pause_image_auth_file = ""
	I1009 19:04:56.982148  161014 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1009 19:04:56.982156  161014 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1009 19:04:56.982162  161014 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1009 19:04:56.982170  161014 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1009 19:04:56.982173  161014 command_runner.go:130] > # pause_command = "/pause"
	I1009 19:04:56.982178  161014 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1009 19:04:56.982186  161014 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1009 19:04:56.982191  161014 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1009 19:04:56.982199  161014 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1009 19:04:56.982204  161014 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1009 19:04:56.982213  161014 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1009 19:04:56.982219  161014 command_runner.go:130] > # pinned_images = [
	I1009 19:04:56.982222  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982227  161014 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1009 19:04:56.982235  161014 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1009 19:04:56.982241  161014 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1009 19:04:56.982248  161014 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1009 19:04:56.982253  161014 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1009 19:04:56.982260  161014 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1009 19:04:56.982265  161014 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1009 19:04:56.982274  161014 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1009 19:04:56.982282  161014 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1009 19:04:56.982287  161014 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1009 19:04:56.982295  161014 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1009 19:04:56.982302  161014 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1009 19:04:56.982307  161014 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1009 19:04:56.982316  161014 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1009 19:04:56.982322  161014 command_runner.go:130] > # changing them here.
	I1009 19:04:56.982327  161014 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1009 19:04:56.982333  161014 command_runner.go:130] > # insecure_registries = [
	I1009 19:04:56.982336  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982342  161014 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1009 19:04:56.982352  161014 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1009 19:04:56.982359  161014 command_runner.go:130] > # image_volumes = "mkdir"
	I1009 19:04:56.982364  161014 command_runner.go:130] > # Temporary directory to use for storing big files
	I1009 19:04:56.982370  161014 command_runner.go:130] > # big_files_temporary_dir = ""
	I1009 19:04:56.982385  161014 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1009 19:04:56.982398  161014 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1009 19:04:56.982403  161014 command_runner.go:130] > # auto_reload_registries = false
	I1009 19:04:56.982412  161014 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1009 19:04:56.982419  161014 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1009 19:04:56.982427  161014 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1009 19:04:56.982431  161014 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1009 19:04:56.982435  161014 command_runner.go:130] > # The mode of short name resolution.
	I1009 19:04:56.982441  161014 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1009 19:04:56.982450  161014 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1009 19:04:56.982455  161014 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1009 19:04:56.982460  161014 command_runner.go:130] > # short_name_mode = "enforcing"
	I1009 19:04:56.982465  161014 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1009 19:04:56.982472  161014 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1009 19:04:56.982476  161014 command_runner.go:130] > # oci_artifact_mount_support = true
	I1009 19:04:56.982484  161014 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1009 19:04:56.982487  161014 command_runner.go:130] > # CNI plugins.
	I1009 19:04:56.982490  161014 command_runner.go:130] > [crio.network]
	I1009 19:04:56.982496  161014 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1009 19:04:56.982501  161014 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1009 19:04:56.982507  161014 command_runner.go:130] > # cni_default_network = ""
	I1009 19:04:56.982512  161014 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1009 19:04:56.982519  161014 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1009 19:04:56.982524  161014 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1009 19:04:56.982530  161014 command_runner.go:130] > # plugin_dirs = [
	I1009 19:04:56.982533  161014 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1009 19:04:56.982536  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982540  161014 command_runner.go:130] > # List of included pod metrics.
	I1009 19:04:56.982544  161014 command_runner.go:130] > # included_pod_metrics = [
	I1009 19:04:56.982547  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982552  161014 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1009 19:04:56.982558  161014 command_runner.go:130] > [crio.metrics]
	I1009 19:04:56.982562  161014 command_runner.go:130] > # Globally enable or disable metrics support.
	I1009 19:04:56.982566  161014 command_runner.go:130] > # enable_metrics = false
	I1009 19:04:56.982570  161014 command_runner.go:130] > # Specify enabled metrics collectors.
	I1009 19:04:56.982574  161014 command_runner.go:130] > # Per default all metrics are enabled.
	I1009 19:04:56.982579  161014 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1009 19:04:56.982588  161014 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1009 19:04:56.982593  161014 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1009 19:04:56.982598  161014 command_runner.go:130] > # metrics_collectors = [
	I1009 19:04:56.982602  161014 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1009 19:04:56.982607  161014 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1009 19:04:56.982610  161014 command_runner.go:130] > # 	"containers_oom_total",
	I1009 19:04:56.982614  161014 command_runner.go:130] > # 	"processes_defunct",
	I1009 19:04:56.982617  161014 command_runner.go:130] > # 	"operations_total",
	I1009 19:04:56.982621  161014 command_runner.go:130] > # 	"operations_latency_seconds",
	I1009 19:04:56.982625  161014 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1009 19:04:56.982629  161014 command_runner.go:130] > # 	"operations_errors_total",
	I1009 19:04:56.982632  161014 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1009 19:04:56.982636  161014 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1009 19:04:56.982640  161014 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1009 19:04:56.982643  161014 command_runner.go:130] > # 	"image_pulls_success_total",
	I1009 19:04:56.982648  161014 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1009 19:04:56.982652  161014 command_runner.go:130] > # 	"containers_oom_count_total",
	I1009 19:04:56.982656  161014 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1009 19:04:56.982660  161014 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1009 19:04:56.982664  161014 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1009 19:04:56.982667  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982672  161014 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1009 19:04:56.982675  161014 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1009 19:04:56.982680  161014 command_runner.go:130] > # The port on which the metrics server will listen.
	I1009 19:04:56.982683  161014 command_runner.go:130] > # metrics_port = 9090
	I1009 19:04:56.982689  161014 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1009 19:04:56.982693  161014 command_runner.go:130] > # metrics_socket = ""
	I1009 19:04:56.982698  161014 command_runner.go:130] > # The certificate for the secure metrics server.
	I1009 19:04:56.982706  161014 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1009 19:04:56.982712  161014 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1009 19:04:56.982718  161014 command_runner.go:130] > # certificate on any modification event.
	I1009 19:04:56.982722  161014 command_runner.go:130] > # metrics_cert = ""
	I1009 19:04:56.982735  161014 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1009 19:04:56.982741  161014 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1009 19:04:56.982746  161014 command_runner.go:130] > # metrics_key = ""
	I1009 19:04:56.982753  161014 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1009 19:04:56.982758  161014 command_runner.go:130] > [crio.tracing]
	I1009 19:04:56.982766  161014 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1009 19:04:56.982771  161014 command_runner.go:130] > # enable_tracing = false
	I1009 19:04:56.982779  161014 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1009 19:04:56.982788  161014 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1009 19:04:56.982798  161014 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1009 19:04:56.982809  161014 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1009 19:04:56.982818  161014 command_runner.go:130] > # CRI-O NRI configuration.
	I1009 19:04:56.982821  161014 command_runner.go:130] > [crio.nri]
	I1009 19:04:56.982825  161014 command_runner.go:130] > # Globally enable or disable NRI.
	I1009 19:04:56.982832  161014 command_runner.go:130] > # enable_nri = true
	I1009 19:04:56.982836  161014 command_runner.go:130] > # NRI socket to listen on.
	I1009 19:04:56.982842  161014 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1009 19:04:56.982846  161014 command_runner.go:130] > # NRI plugin directory to use.
	I1009 19:04:56.982851  161014 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1009 19:04:56.982856  161014 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1009 19:04:56.982863  161014 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1009 19:04:56.982868  161014 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1009 19:04:56.982900  161014 command_runner.go:130] > # nri_disable_connections = false
	I1009 19:04:56.982908  161014 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1009 19:04:56.982912  161014 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1009 19:04:56.982916  161014 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1009 19:04:56.982920  161014 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1009 19:04:56.982926  161014 command_runner.go:130] > # NRI default validator configuration.
	I1009 19:04:56.982933  161014 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1009 19:04:56.982946  161014 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1009 19:04:56.982953  161014 command_runner.go:130] > # can be restricted/rejected:
	I1009 19:04:56.982956  161014 command_runner.go:130] > # - OCI hook injection
	I1009 19:04:56.982961  161014 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1009 19:04:56.982969  161014 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1009 19:04:56.982974  161014 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1009 19:04:56.982982  161014 command_runner.go:130] > # - adjustment of linux namespaces
	I1009 19:04:56.982988  161014 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1009 19:04:56.982996  161014 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1009 19:04:56.983002  161014 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1009 19:04:56.983007  161014 command_runner.go:130] > #
	I1009 19:04:56.983011  161014 command_runner.go:130] > # [crio.nri.default_validator]
	I1009 19:04:56.983015  161014 command_runner.go:130] > # nri_enable_default_validator = false
	I1009 19:04:56.983020  161014 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1009 19:04:56.983027  161014 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1009 19:04:56.983032  161014 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1009 19:04:56.983039  161014 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1009 19:04:56.983044  161014 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1009 19:04:56.983050  161014 command_runner.go:130] > # nri_validator_required_plugins = [
	I1009 19:04:56.983053  161014 command_runner.go:130] > # ]
	I1009 19:04:56.983058  161014 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1009 19:04:56.983066  161014 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1009 19:04:56.983069  161014 command_runner.go:130] > [crio.stats]
	I1009 19:04:56.983074  161014 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1009 19:04:56.983087  161014 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1009 19:04:56.983092  161014 command_runner.go:130] > # stats_collection_period = 0
	I1009 19:04:56.983097  161014 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1009 19:04:56.983106  161014 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1009 19:04:56.983109  161014 command_runner.go:130] > # collection_period = 0
	I1009 19:04:56.983133  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.961902946Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1009 19:04:56.983143  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.961928249Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1009 19:04:56.983151  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.961952575Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1009 19:04:56.983160  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.961969788Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1009 19:04:56.983168  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.962036562Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.983178  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.96221376Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1009 19:04:56.983187  161014 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1009 19:04:56.983250  161014 cni.go:84] Creating CNI manager for ""
	I1009 19:04:56.983259  161014 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:04:56.983280  161014 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:04:56.983306  161014 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-158523 NodeName:functional-158523 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:04:56.983442  161014 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-158523"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
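The rendered kubeadm config above is a single multi-document YAML stream bundling four API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration); it is copied to the node a few steps below as /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch that enumerates those documents, assuming gopkg.in/yaml.v3 and a local copy of the file under the placeholder name kubeadm.yaml:

package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// docHeader captures only the identifying fields of each document.
type docHeader struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	// Placeholder filename; the test copies the rendered config to
	// /var/tmp/minikube/kubeadm.yaml.new on the node.
	raw, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	// yaml.v3 decodes one document per Decode call and returns io.EOF
	// when the "---"-separated stream is exhausted.
	dec := yaml.NewDecoder(bytes.NewReader(raw))
	for {
		var h docHeader
		if err := dec.Decode(&h); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s / %s\n", h.APIVersion, h.Kind)
	}
}

Against the config shown above this would print the two kubeadm.k8s.io/v1beta4 documents followed by the kubelet and kube-proxy configurations.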
	
	I1009 19:04:56.983504  161014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:04:56.992256  161014 command_runner.go:130] > kubeadm
	I1009 19:04:56.992278  161014 command_runner.go:130] > kubectl
	I1009 19:04:56.992282  161014 command_runner.go:130] > kubelet
	I1009 19:04:56.992304  161014 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:04:56.992347  161014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:04:57.000522  161014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 19:04:57.013113  161014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:04:57.026211  161014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1009 19:04:57.038776  161014 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:04:57.042573  161014 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
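The grep above only verifies that /etc/hosts already maps control-plane.minikube.internal to the node IP before the kubelet is started. A hedged in-process equivalent of that check in Go (paths and values mirror the log; the function name is illustrative):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// hasHostEntry reports whether the hosts file maps host to ip,
// mirroring the `grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts` step.
func hasHostEntry(path, ip, host string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) < 2 || fields[0] != ip {
			continue
		}
		for _, name := range fields[1:] {
			if name == host {
				return true, nil
			}
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasHostEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ok)
}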
	I1009 19:04:57.042649  161014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:04:57.130268  161014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:04:57.143785  161014 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523 for IP: 192.168.49.2
	I1009 19:04:57.143808  161014 certs.go:195] generating shared ca certs ...
	I1009 19:04:57.143829  161014 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:04:57.144031  161014 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:04:57.144072  161014 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:04:57.144082  161014 certs.go:257] generating profile certs ...
	I1009 19:04:57.144182  161014 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key
	I1009 19:04:57.144224  161014 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key.1809350a
	I1009 19:04:57.144260  161014 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key
	I1009 19:04:57.144272  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:04:57.144283  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:04:57.144293  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:04:57.144302  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:04:57.144314  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:04:57.144325  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:04:57.144336  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:04:57.144348  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:04:57.144426  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:04:57.144461  161014 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:04:57.144470  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:04:57.144493  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:04:57.144516  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:04:57.144537  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:04:57.144579  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:04:57.144605  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.144619  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.144631  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.145144  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:04:57.163977  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:04:57.182180  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:04:57.200741  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:04:57.219086  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:04:57.236775  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:04:57.254529  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:04:57.272276  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:04:57.290804  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:04:57.309893  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:04:57.327963  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:04:57.345810  161014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:04:57.359185  161014 ssh_runner.go:195] Run: openssl version
	I1009 19:04:57.366137  161014 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1009 19:04:57.366338  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:04:57.375985  161014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.380041  161014 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.380082  161014 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.380117  161014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.415315  161014 command_runner.go:130] > b5213941
	I1009 19:04:57.415413  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:04:57.424315  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:04:57.433300  161014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.437553  161014 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.437594  161014 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.437635  161014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.472859  161014 command_runner.go:130] > 51391683
	I1009 19:04:57.473177  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:04:57.481800  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:04:57.490997  161014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.494992  161014 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.495040  161014 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.495095  161014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.529155  161014 command_runner.go:130] > 3ec20f2e
	I1009 19:04:57.529240  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
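The three `openssl x509 -hash -noout` runs above compute the OpenSSL subject hash of each CA (for example b5213941 for minikubeCA.pem), and each certificate is then symlinked as <hash>.0 under /etc/ssl/certs so OpenSSL-based clients can look the CA up by subject. A small Go sketch of that pattern, shelling out to openssl for the hash; the paths are illustrative rather than copied from the run:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash and
// symlinks the PEM as <hash>.0 in certsDir, mirroring the "ln -fs" steps above.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale link, as "ln -fs" would.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	// Illustrative paths; the test links CAs from /usr/share/ca-certificates into /etc/ssl/certs.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked")
}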
	I1009 19:04:57.537710  161014 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:04:57.541624  161014 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:04:57.541645  161014 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1009 19:04:57.541653  161014 command_runner.go:130] > Device: 8,1	Inode: 573939      Links: 1
	I1009 19:04:57.541662  161014 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 19:04:57.541679  161014 command_runner.go:130] > Access: 2025-10-09 19:00:49.271404553 +0000
	I1009 19:04:57.541690  161014 command_runner.go:130] > Modify: 2025-10-09 18:56:44.405307509 +0000
	I1009 19:04:57.541704  161014 command_runner.go:130] > Change: 2025-10-09 18:56:44.405307509 +0000
	I1009 19:04:57.541714  161014 command_runner.go:130] >  Birth: 2025-10-09 18:56:44.405307509 +0000
	I1009 19:04:57.541773  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:04:57.576034  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.576418  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:04:57.610746  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.611106  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:04:57.645558  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.645650  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:04:57.680926  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.681269  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:04:57.716681  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.716965  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 19:04:57.752444  161014 command_runner.go:130] > Certificate will not expire
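
Each `-checkend 86400` call above asks openssl whether the certificate expires within the next 24 hours; "Certificate will not expire" means the check passed. The same test can be done without shelling out by parsing the PEM with crypto/x509. A rough equivalent, with an illustrative path from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within the given window (the `openssl x509 -checkend` equivalent).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
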
	I1009 19:04:57.752733  161014 kubeadm.go:400] StartCluster: {Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:04:57.752827  161014 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:04:57.752877  161014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:04:57.781930  161014 cri.go:89] found id: ""
	I1009 19:04:57.782002  161014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:04:57.790396  161014 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1009 19:04:57.790421  161014 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1009 19:04:57.790427  161014 command_runner.go:130] > /var/lib/minikube/etcd:
	I1009 19:04:57.790446  161014 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:04:57.790453  161014 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:04:57.790499  161014 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:04:57.798150  161014 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:04:57.798252  161014 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-158523" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:57.798307  161014 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-137890/kubeconfig needs updating (will repair): [kubeconfig missing "functional-158523" cluster setting kubeconfig missing "functional-158523" context setting]
	I1009 19:04:57.798648  161014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
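
The kubeconfig repair the log describes (the "functional-158523" cluster and context entries are missing, so the file "needs updating") maps to a small amount of client-go clientcmd code. A hedged sketch under assumed paths and names, not minikube's actual implementation:

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig adds (or overwrites) a cluster/context/user triple so the
// profile appears in the kubeconfig, roughly what the repair path above does.
func repairKubeconfig(path, name, server, caFile, certFile, keyFile string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		cfg = api.NewConfig() // start fresh if the file is missing or unreadable
	}
	cfg.Clusters[name] = &api.Cluster{Server: server, CertificateAuthority: caFile}
	cfg.AuthInfos[name] = &api.AuthInfo{ClientCertificate: certFile, ClientKey: keyFile}
	cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name, Namespace: "default"}
	cfg.CurrentContext = name
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	// All values are illustrative, copied from the log above.
	_ = repairKubeconfig(
		"/home/jenkins/minikube-integration/21683-137890/kubeconfig",
		"functional-158523",
		"https://192.168.49.2:8441",
		"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt",
		"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt",
		"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key",
	)
}
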
	I1009 19:04:57.799428  161014 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:57.799625  161014 kapi.go:59] client config for functional-158523: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:04:57.800169  161014 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:04:57.800185  161014 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:04:57.800191  161014 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:04:57.800195  161014 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:04:57.800199  161014 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:04:57.800257  161014 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:04:57.800663  161014 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:04:57.808677  161014 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:04:57.808712  161014 kubeadm.go:601] duration metric: took 18.25382ms to restartPrimaryControlPlane
	I1009 19:04:57.808720  161014 kubeadm.go:402] duration metric: took 56.001565ms to StartCluster
	I1009 19:04:57.808736  161014 settings.go:142] acquiring lock: {Name:mk34466033fb866bc8e167d2d953624ad0802283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:04:57.808837  161014 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:57.809418  161014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:04:57.809652  161014 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:04:57.809720  161014 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:04:57.809869  161014 addons.go:69] Setting storage-provisioner=true in profile "functional-158523"
	I1009 19:04:57.809882  161014 addons.go:69] Setting default-storageclass=true in profile "functional-158523"
	I1009 19:04:57.809890  161014 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:57.809907  161014 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-158523"
	I1009 19:04:57.809888  161014 addons.go:238] Setting addon storage-provisioner=true in "functional-158523"
	I1009 19:04:57.809999  161014 host.go:66] Checking if "functional-158523" exists ...
	I1009 19:04:57.810265  161014 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:04:57.810325  161014 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:04:57.815899  161014 out.go:179] * Verifying Kubernetes components...
	I1009 19:04:57.817259  161014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:04:57.830319  161014 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:57.830565  161014 kapi.go:59] client config for functional-158523: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:04:57.830893  161014 addons.go:238] Setting addon default-storageclass=true in "functional-158523"
	I1009 19:04:57.830936  161014 host.go:66] Checking if "functional-158523" exists ...
	I1009 19:04:57.831444  161014 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:04:57.831697  161014 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:04:57.833512  161014 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:57.833530  161014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:04:57.833580  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:57.856284  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:57.858504  161014 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:57.858545  161014 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:04:57.858618  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:57.879618  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
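
The sshutil lines above open SSH clients against the Docker-published port (127.0.0.1:32778) using the machine's id_rsa key so the addon manifests can be copied and applied inside the node. A minimal sketch of that connection with golang.org/x/crypto/ssh; the address, user, and key path are copied from the log and are illustrative only.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address copied from the log above; adjust as needed.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test nodes only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32778", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("%s err=%v\n", out, err)
}
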
	I1009 19:04:57.916522  161014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:04:57.930660  161014 node_ready.go:35] waiting up to 6m0s for node "functional-158523" to be "Ready" ...
	I1009 19:04:57.930861  161014 type.go:168] "Request Body" body=""
	I1009 19:04:57.930944  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:57.931232  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
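
The GET https://192.168.49.2:8441/api/v1/nodes/functional-158523 requests that repeat from here on are the readiness poll started at 19:04:57.930: roughly every 500ms the client re-fetches the node and checks its Ready condition, tolerating connection-refused errors until the API server comes back. A hedged sketch of that loop with client-go; the kubeconfig path is an assumption and this is not minikube's node_ready implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its Ready condition is True or the
// timeout elapses; transient errors (e.g. connection refused) are retried.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready after %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(cs, "functional-158523", 6*time.Minute); err != nil {
		panic(err)
	}
}
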
	I1009 19:04:57.969596  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:57.988544  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:58.026986  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.027037  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.027061  161014 retry.go:31] will retry after 164.488016ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.047051  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.047098  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.047116  161014 retry.go:31] will retry after 194.483244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
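
From here the log alternates between the two kubectl apply attempts (storage-provisioner and storageclass) and roughly increasing retry delays (164ms, 194ms, 217ms, ... up to several seconds) while the API server still refuses connections on port 8441. The pattern is a capped, growing backoff with a little jitter; a generic sketch of it below, which is not minikube's retry.go, whose exact policy the log does not show.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
// doubling the delay between calls (capped at max) and adding jitter.
func retryWithBackoff(attempts int, initial, max time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 4))
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		if delay *= 2; delay > max {
			delay = max
		}
	}
	return fmt.Errorf("still failing after %d attempts: %w", attempts, err)
}

func main() {
	start := time.Now()
	_ = retryWithBackoff(8, 150*time.Millisecond, 8*time.Second, func() error {
		// Stand-in for the kubectl apply call in the log; succeeds after ~5s.
		if time.Since(start) < 5*time.Second {
			return fmt.Errorf("connection refused")
		}
		return nil
	})
}
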
	I1009 19:04:58.192480  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:58.242329  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:58.247629  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.247684  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.247711  161014 retry.go:31] will retry after 217.861079ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.297775  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.297841  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.297866  161014 retry.go:31] will retry after 198.924996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.431075  161014 type.go:168] "Request Body" body=""
	I1009 19:04:58.431155  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:58.431537  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:04:58.466794  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:58.497509  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:58.521187  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.524476  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.524506  161014 retry.go:31] will retry after 579.961825ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.549062  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.552103  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.552134  161014 retry.go:31] will retry after 574.521259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.930944  161014 type.go:168] "Request Body" body=""
	I1009 19:04:58.931032  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:58.931452  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:04:59.104703  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:59.127368  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:59.161080  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:59.161136  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.161156  161014 retry.go:31] will retry after 734.839127ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.184025  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:59.184076  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.184098  161014 retry.go:31] will retry after 1.025268007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.431572  161014 type.go:168] "Request Body" body=""
	I1009 19:04:59.431684  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:59.432074  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:04:59.896539  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:59.931433  161014 type.go:168] "Request Body" body=""
	I1009 19:04:59.931506  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:59.931837  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:04:59.931910  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:04:59.949186  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:59.952452  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.952481  161014 retry.go:31] will retry after 1.084602838s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:00.209882  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:00.262148  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:00.265292  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:00.265336  161014 retry.go:31] will retry after 1.287073207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:00.431721  161014 type.go:168] "Request Body" body=""
	I1009 19:05:00.431804  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:00.432145  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:00.931797  161014 type.go:168] "Request Body" body=""
	I1009 19:05:00.931880  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:00.932240  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:01.037525  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:01.094236  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:01.094283  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:01.094304  161014 retry.go:31] will retry after 1.546934371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:01.431777  161014 type.go:168] "Request Body" body=""
	I1009 19:05:01.431854  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:01.432251  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:01.553547  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:01.609996  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:01.610065  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:01.610089  161014 retry.go:31] will retry after 1.923829662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:01.931551  161014 type.go:168] "Request Body" body=""
	I1009 19:05:01.931629  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:01.931969  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:01.932040  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:02.431907  161014 type.go:168] "Request Body" body=""
	I1009 19:05:02.431987  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:02.432358  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:02.641614  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:02.696762  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:02.699844  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:02.699873  161014 retry.go:31] will retry after 2.36633365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:02.931226  161014 type.go:168] "Request Body" body=""
	I1009 19:05:02.931306  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:02.931737  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:03.431598  161014 type.go:168] "Request Body" body=""
	I1009 19:05:03.431682  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:03.432054  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:03.534329  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:03.590565  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:03.590611  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:03.590631  161014 retry.go:31] will retry after 1.952860092s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:03.931329  161014 type.go:168] "Request Body" body=""
	I1009 19:05:03.931427  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:03.931807  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:04.431531  161014 type.go:168] "Request Body" body=""
	I1009 19:05:04.431620  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:04.432004  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:04.432087  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:04.931914  161014 type.go:168] "Request Body" body=""
	I1009 19:05:04.931993  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:04.932341  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:05.066624  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:05.119719  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:05.123044  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:05.123086  161014 retry.go:31] will retry after 6.108852521s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:05.431602  161014 type.go:168] "Request Body" body=""
	I1009 19:05:05.431719  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:05.432158  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:05.544481  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:05.597312  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:05.600803  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:05.600837  161014 retry.go:31] will retry after 3.364758217s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:05.931296  161014 type.go:168] "Request Body" body=""
	I1009 19:05:05.931418  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:05.931808  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:06.431397  161014 type.go:168] "Request Body" body=""
	I1009 19:05:06.431479  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:06.431873  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:06.931533  161014 type.go:168] "Request Body" body=""
	I1009 19:05:06.931626  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:06.932024  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:06.932104  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:07.431687  161014 type.go:168] "Request Body" body=""
	I1009 19:05:07.431779  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:07.432140  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:07.930948  161014 type.go:168] "Request Body" body=""
	I1009 19:05:07.931031  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:07.931436  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:08.431020  161014 type.go:168] "Request Body" body=""
	I1009 19:05:08.431105  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:08.431489  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:08.931423  161014 type.go:168] "Request Body" body=""
	I1009 19:05:08.931528  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:08.931995  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:08.966195  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:09.019582  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:09.022605  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:09.022645  161014 retry.go:31] will retry after 7.771885559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:09.431187  161014 type.go:168] "Request Body" body=""
	I1009 19:05:09.431265  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:09.431662  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:09.431745  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:09.931552  161014 type.go:168] "Request Body" body=""
	I1009 19:05:09.931635  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:09.931979  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:10.431855  161014 type.go:168] "Request Body" body=""
	I1009 19:05:10.431945  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:10.432274  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:10.931110  161014 type.go:168] "Request Body" body=""
	I1009 19:05:10.931203  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:10.931572  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:11.233030  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:11.288902  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:11.288953  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:11.288975  161014 retry.go:31] will retry after 3.345246752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:11.431308  161014 type.go:168] "Request Body" body=""
	I1009 19:05:11.431402  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:11.431749  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:11.431819  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:11.931642  161014 type.go:168] "Request Body" body=""
	I1009 19:05:11.931749  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:11.932113  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:12.430947  161014 type.go:168] "Request Body" body=""
	I1009 19:05:12.431034  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:12.431445  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:12.931228  161014 type.go:168] "Request Body" body=""
	I1009 19:05:12.931315  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:12.931711  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:13.431639  161014 type.go:168] "Request Body" body=""
	I1009 19:05:13.431724  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:13.432088  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:13.432151  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:13.930962  161014 type.go:168] "Request Body" body=""
	I1009 19:05:13.931048  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:13.931440  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:14.431210  161014 type.go:168] "Request Body" body=""
	I1009 19:05:14.431296  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:14.431700  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:14.635101  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:14.689463  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:14.692943  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:14.692988  161014 retry.go:31] will retry after 8.426490786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:14.931454  161014 type.go:168] "Request Body" body=""
	I1009 19:05:14.931531  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:14.931912  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:15.431649  161014 type.go:168] "Request Body" body=""
	I1009 19:05:15.431767  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:15.432139  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:15.432244  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:15.931808  161014 type.go:168] "Request Body" body=""
	I1009 19:05:15.931885  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:15.932226  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:16.430935  161014 type.go:168] "Request Body" body=""
	I1009 19:05:16.431026  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:16.431417  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:16.794854  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:16.849041  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:16.852200  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:16.852234  161014 retry.go:31] will retry after 11.902123756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:16.931535  161014 type.go:168] "Request Body" body=""
	I1009 19:05:16.931634  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:16.931986  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:17.431870  161014 type.go:168] "Request Body" body=""
	I1009 19:05:17.431977  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:17.432410  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:17.432479  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:17.931194  161014 type.go:168] "Request Body" body=""
	I1009 19:05:17.931301  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:17.931659  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:18.431314  161014 type.go:168] "Request Body" body=""
	I1009 19:05:18.431420  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:18.431851  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:18.931802  161014 type.go:168] "Request Body" body=""
	I1009 19:05:18.931891  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:18.932247  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:19.431889  161014 type.go:168] "Request Body" body=""
	I1009 19:05:19.431978  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:19.432365  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:19.930982  161014 type.go:168] "Request Body" body=""
	I1009 19:05:19.931070  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:19.931472  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:19.931543  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:20.431080  161014 type.go:168] "Request Body" body=""
	I1009 19:05:20.431159  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:20.431505  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:20.931084  161014 type.go:168] "Request Body" body=""
	I1009 19:05:20.931162  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:20.931465  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:21.431126  161014 type.go:168] "Request Body" body=""
	I1009 19:05:21.431210  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:21.431583  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:21.931228  161014 type.go:168] "Request Body" body=""
	I1009 19:05:21.931310  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:21.931673  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:21.931757  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:22.431250  161014 type.go:168] "Request Body" body=""
	I1009 19:05:22.431335  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:22.431706  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:22.931281  161014 type.go:168] "Request Body" body=""
	I1009 19:05:22.931373  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:22.931764  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:23.120080  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:23.178288  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:23.178344  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:23.178369  161014 retry.go:31] will retry after 12.554942652s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:23.431791  161014 type.go:168] "Request Body" body=""
	I1009 19:05:23.431875  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:23.432227  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:23.931667  161014 type.go:168] "Request Body" body=""
	I1009 19:05:23.931758  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:23.932103  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:23.932167  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:24.431140  161014 type.go:168] "Request Body" body=""
	I1009 19:05:24.431223  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:24.431607  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:24.931219  161014 type.go:168] "Request Body" body=""
	I1009 19:05:24.931297  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:24.931656  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:25.431282  161014 type.go:168] "Request Body" body=""
	I1009 19:05:25.431369  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:25.431754  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:25.931371  161014 type.go:168] "Request Body" body=""
	I1009 19:05:25.931475  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:25.931852  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:26.431721  161014 type.go:168] "Request Body" body=""
	I1009 19:05:26.431805  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:26.432173  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:26.432243  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:26.931895  161014 type.go:168] "Request Body" body=""
	I1009 19:05:26.931982  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:26.932327  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:27.430978  161014 type.go:168] "Request Body" body=""
	I1009 19:05:27.431069  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:27.431440  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:27.931122  161014 type.go:168] "Request Body" body=""
	I1009 19:05:27.931203  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:27.931568  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:28.431156  161014 type.go:168] "Request Body" body=""
	I1009 19:05:28.431240  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:28.431629  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:28.755128  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:28.809181  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:28.812331  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:28.812369  161014 retry.go:31] will retry after 17.899546939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:28.931943  161014 type.go:168] "Request Body" body=""
	I1009 19:05:28.932042  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:28.932423  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:28.932495  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:29.431031  161014 type.go:168] "Request Body" body=""
	I1009 19:05:29.431117  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:29.431488  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:29.931112  161014 type.go:168] "Request Body" body=""
	I1009 19:05:29.931191  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:29.931549  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:30.431108  161014 type.go:168] "Request Body" body=""
	I1009 19:05:30.431184  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:30.431580  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:30.931234  161014 type.go:168] "Request Body" body=""
	I1009 19:05:30.931310  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:30.931701  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:31.431358  161014 type.go:168] "Request Body" body=""
	I1009 19:05:31.431464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:31.431883  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:31.431968  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:31.931551  161014 type.go:168] "Request Body" body=""
	I1009 19:05:31.931654  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:31.932150  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:32.431843  161014 type.go:168] "Request Body" body=""
	I1009 19:05:32.431928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:32.432325  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:32.930923  161014 type.go:168] "Request Body" body=""
	I1009 19:05:32.931009  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:32.931419  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:33.431049  161014 type.go:168] "Request Body" body=""
	I1009 19:05:33.431139  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:33.431539  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:33.931442  161014 type.go:168] "Request Body" body=""
	I1009 19:05:33.931529  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:33.931921  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:33.931994  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:34.431615  161014 type.go:168] "Request Body" body=""
	I1009 19:05:34.431709  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:34.432084  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:34.931772  161014 type.go:168] "Request Body" body=""
	I1009 19:05:34.931853  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:34.932239  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:35.431990  161014 type.go:168] "Request Body" body=""
	I1009 19:05:35.432083  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:35.432473  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:35.733912  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:35.787306  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:35.790843  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:35.790879  161014 retry.go:31] will retry after 31.721699669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:35.931334  161014 type.go:168] "Request Body" body=""
	I1009 19:05:35.931474  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:35.931860  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:36.431788  161014 type.go:168] "Request Body" body=""
	I1009 19:05:36.431878  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:36.432242  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:36.432309  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:36.931065  161014 type.go:168] "Request Body" body=""
	I1009 19:05:36.931156  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:36.931540  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:37.431314  161014 type.go:168] "Request Body" body=""
	I1009 19:05:37.431439  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:37.431797  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:37.931603  161014 type.go:168] "Request Body" body=""
	I1009 19:05:37.931697  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:37.932081  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:38.431682  161014 type.go:168] "Request Body" body=""
	I1009 19:05:38.431775  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:38.432127  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:38.930953  161014 type.go:168] "Request Body" body=""
	I1009 19:05:38.931049  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:38.931414  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:38.931498  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:39.430956  161014 type.go:168] "Request Body" body=""
	I1009 19:05:39.431070  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:39.431453  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:39.931034  161014 type.go:168] "Request Body" body=""
	I1009 19:05:39.931145  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:39.931490  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:40.431075  161014 type.go:168] "Request Body" body=""
	I1009 19:05:40.431166  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:40.431582  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:40.931249  161014 type.go:168] "Request Body" body=""
	I1009 19:05:40.931336  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:40.931693  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:40.931756  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:41.431331  161014 type.go:168] "Request Body" body=""
	I1009 19:05:41.431437  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:41.431805  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:41.931445  161014 type.go:168] "Request Body" body=""
	I1009 19:05:41.931535  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:41.931928  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:42.431625  161014 type.go:168] "Request Body" body=""
	I1009 19:05:42.431705  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:42.432064  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:42.931717  161014 type.go:168] "Request Body" body=""
	I1009 19:05:42.931803  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:42.932175  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:42.932247  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:43.430857  161014 type.go:168] "Request Body" body=""
	I1009 19:05:43.430971  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:43.431317  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:43.931149  161014 type.go:168] "Request Body" body=""
	I1009 19:05:43.931232  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:43.931588  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:44.431181  161014 type.go:168] "Request Body" body=""
	I1009 19:05:44.431266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:44.431639  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:44.931222  161014 type.go:168] "Request Body" body=""
	I1009 19:05:44.931306  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:44.931692  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:45.431277  161014 type.go:168] "Request Body" body=""
	I1009 19:05:45.431360  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:45.431736  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:45.431802  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:45.931357  161014 type.go:168] "Request Body" body=""
	I1009 19:05:45.931462  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:45.931838  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:46.431506  161014 type.go:168] "Request Body" body=""
	I1009 19:05:46.431593  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:46.431956  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:46.712449  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:46.768626  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:46.768679  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:46.768704  161014 retry.go:31] will retry after 25.41172348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:46.930938  161014 type.go:168] "Request Body" body=""
	I1009 19:05:46.931055  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:46.931460  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:47.431076  161014 type.go:168] "Request Body" body=""
	I1009 19:05:47.431153  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:47.431556  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:47.931415  161014 type.go:168] "Request Body" body=""
	I1009 19:05:47.931510  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:47.931879  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:47.931959  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:48.431674  161014 type.go:168] "Request Body" body=""
	I1009 19:05:48.431759  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:48.432094  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:48.930896  161014 type.go:168] "Request Body" body=""
	I1009 19:05:48.931001  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:48.931373  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:49.430996  161014 type.go:168] "Request Body" body=""
	I1009 19:05:49.431076  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:49.431465  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:49.931262  161014 type.go:168] "Request Body" body=""
	I1009 19:05:49.931370  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:49.931789  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:50.431699  161014 type.go:168] "Request Body" body=""
	I1009 19:05:50.431782  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:50.432126  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:50.432204  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:50.930957  161014 type.go:168] "Request Body" body=""
	I1009 19:05:50.931084  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:50.931482  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:51.431250  161014 type.go:168] "Request Body" body=""
	I1009 19:05:51.431347  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:51.431706  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:51.931601  161014 type.go:168] "Request Body" body=""
	I1009 19:05:51.931698  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:51.932063  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:52.430862  161014 type.go:168] "Request Body" body=""
	I1009 19:05:52.430953  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:52.431298  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:52.931085  161014 type.go:168] "Request Body" body=""
	I1009 19:05:52.931179  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:52.931557  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:52.931624  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:53.431339  161014 type.go:168] "Request Body" body=""
	I1009 19:05:53.431459  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:53.431829  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:53.931637  161014 type.go:168] "Request Body" body=""
	I1009 19:05:53.931736  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:53.932120  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:54.430920  161014 type.go:168] "Request Body" body=""
	I1009 19:05:54.431014  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:54.431426  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:54.931205  161014 type.go:168] "Request Body" body=""
	I1009 19:05:54.931299  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:54.931695  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:54.931776  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:55.431596  161014 type.go:168] "Request Body" body=""
	I1009 19:05:55.431674  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:55.432023  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:55.931858  161014 type.go:168] "Request Body" body=""
	I1009 19:05:55.931949  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:55.932317  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:56.431017  161014 type.go:168] "Request Body" body=""
	I1009 19:05:56.431102  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:56.431477  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:56.931242  161014 type.go:168] "Request Body" body=""
	I1009 19:05:56.931328  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:56.931740  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:56.931822  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:57.431701  161014 type.go:168] "Request Body" body=""
	I1009 19:05:57.431787  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:57.432169  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:57.931004  161014 type.go:168] "Request Body" body=""
	I1009 19:05:57.931088  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:57.931492  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:58.430896  161014 type.go:168] "Request Body" body=""
	I1009 19:05:58.430977  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:58.431316  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:58.931136  161014 type.go:168] "Request Body" body=""
	I1009 19:05:58.931305  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:58.931701  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:59.431527  161014 type.go:168] "Request Body" body=""
	I1009 19:05:59.431619  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:59.431986  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:59.432056  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:59.931914  161014 type.go:168] "Request Body" body=""
	I1009 19:05:59.932022  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:59.932451  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:00.431235  161014 type.go:168] "Request Body" body=""
	I1009 19:06:00.431321  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:00.431700  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:00.931491  161014 type.go:168] "Request Body" body=""
	I1009 19:06:00.931598  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:00.932038  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:01.430870  161014 type.go:168] "Request Body" body=""
	I1009 19:06:01.430962  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:01.431351  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:01.931160  161014 type.go:168] "Request Body" body=""
	I1009 19:06:01.931259  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:01.931701  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:01.931781  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:02.431642  161014 type.go:168] "Request Body" body=""
	I1009 19:06:02.431741  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:02.432105  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:02.930912  161014 type.go:168] "Request Body" body=""
	I1009 19:06:02.931026  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:02.931440  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:03.431233  161014 type.go:168] "Request Body" body=""
	I1009 19:06:03.431316  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:03.431698  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:03.931548  161014 type.go:168] "Request Body" body=""
	I1009 19:06:03.931627  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:03.932000  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:03.932085  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:04.431884  161014 type.go:168] "Request Body" body=""
	I1009 19:06:04.431963  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:04.432329  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:04.931167  161014 type.go:168] "Request Body" body=""
	I1009 19:06:04.931256  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:04.931675  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:05.431519  161014 type.go:168] "Request Body" body=""
	I1009 19:06:05.431593  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:05.431983  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:05.931927  161014 type.go:168] "Request Body" body=""
	I1009 19:06:05.932019  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:05.932421  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:05.932517  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:06.431278  161014 type.go:168] "Request Body" body=""
	I1009 19:06:06.431359  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:06.431798  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:06.931667  161014 type.go:168] "Request Body" body=""
	I1009 19:06:06.931753  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:06.932149  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:07.430942  161014 type.go:168] "Request Body" body=""
	I1009 19:06:07.431028  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:07.431419  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:07.513672  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:06:07.571073  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:07.571125  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:06:07.571145  161014 retry.go:31] will retry after 23.39838606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:06:07.931687  161014 type.go:168] "Request Body" body=""
	I1009 19:06:07.931784  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:07.932135  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:08.430924  161014 type.go:168] "Request Body" body=""
	I1009 19:06:08.431034  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:08.431403  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:08.431469  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:08.931208  161014 type.go:168] "Request Body" body=""
	I1009 19:06:08.931288  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:08.931643  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:09.431520  161014 type.go:168] "Request Body" body=""
	I1009 19:06:09.431629  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:09.432018  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:09.931868  161014 type.go:168] "Request Body" body=""
	I1009 19:06:09.931945  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:09.932304  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:10.431151  161014 type.go:168] "Request Body" body=""
	I1009 19:06:10.431248  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:10.431669  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:10.431738  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:10.931500  161014 type.go:168] "Request Body" body=""
	I1009 19:06:10.931584  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:10.931948  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:11.431952  161014 type.go:168] "Request Body" body=""
	I1009 19:06:11.432052  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:11.432455  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:11.931228  161014 type.go:168] "Request Body" body=""
	I1009 19:06:11.931310  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:11.931689  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:12.181131  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:06:12.238294  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:12.238358  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:06:12.238405  161014 retry.go:31] will retry after 21.481583015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:06:12.431681  161014 type.go:168] "Request Body" body=""
	I1009 19:06:12.431761  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:12.432057  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:12.432128  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:12.931845  161014 type.go:168] "Request Body" body=""
	I1009 19:06:12.931939  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:12.932415  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:13.431004  161014 type.go:168] "Request Body" body=""
	I1009 19:06:13.431117  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:13.431483  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:13.931308  161014 type.go:168] "Request Body" body=""
	I1009 19:06:13.931404  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:13.931807  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:14.431415  161014 type.go:168] "Request Body" body=""
	I1009 19:06:14.431502  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:14.431906  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:14.931635  161014 type.go:168] "Request Body" body=""
	I1009 19:06:14.931725  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:14.932138  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:14.932205  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:15.431840  161014 type.go:168] "Request Body" body=""
	I1009 19:06:15.431927  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:15.432292  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:15.930896  161014 type.go:168] "Request Body" body=""
	I1009 19:06:15.930996  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:15.931404  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:16.431000  161014 type.go:168] "Request Body" body=""
	I1009 19:06:16.431088  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:16.431482  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:16.931089  161014 type.go:168] "Request Body" body=""
	I1009 19:06:16.931169  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:16.931606  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:17.431187  161014 type.go:168] "Request Body" body=""
	I1009 19:06:17.431266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:17.431645  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:17.431717  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:17.931505  161014 type.go:168] "Request Body" body=""
	I1009 19:06:17.931588  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:17.931977  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:18.431663  161014 type.go:168] "Request Body" body=""
	I1009 19:06:18.431753  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:18.432133  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:18.931039  161014 type.go:168] "Request Body" body=""
	I1009 19:06:18.931125  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:18.931496  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:19.431026  161014 type.go:168] "Request Body" body=""
	I1009 19:06:19.431101  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:19.431425  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:19.931079  161014 type.go:168] "Request Body" body=""
	I1009 19:06:19.931160  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:19.931533  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:19.931605  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:20.431142  161014 type.go:168] "Request Body" body=""
	I1009 19:06:20.431225  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:20.431606  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:20.931205  161014 type.go:168] "Request Body" body=""
	I1009 19:06:20.931288  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:20.931685  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:21.431270  161014 type.go:168] "Request Body" body=""
	I1009 19:06:21.431352  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:21.431727  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:21.931351  161014 type.go:168] "Request Body" body=""
	I1009 19:06:21.931459  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:21.931867  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:21.931960  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:22.431630  161014 type.go:168] "Request Body" body=""
	I1009 19:06:22.431720  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:22.432112  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:22.931909  161014 type.go:168] "Request Body" body=""
	I1009 19:06:22.932006  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:22.932466  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:23.431019  161014 type.go:168] "Request Body" body=""
	I1009 19:06:23.431108  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:23.431500  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:23.931352  161014 type.go:168] "Request Body" body=""
	I1009 19:06:23.931464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:23.931866  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:24.430863  161014 type.go:168] "Request Body" body=""
	I1009 19:06:24.430951  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:24.431355  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:24.431478  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:24.930971  161014 type.go:168] "Request Body" body=""
	I1009 19:06:24.931061  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:24.931476  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:25.431052  161014 type.go:168] "Request Body" body=""
	I1009 19:06:25.431143  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:25.431497  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:25.931072  161014 type.go:168] "Request Body" body=""
	I1009 19:06:25.931164  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:25.931594  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:26.430916  161014 type.go:168] "Request Body" body=""
	I1009 19:06:26.431010  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:26.431407  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:26.931057  161014 type.go:168] "Request Body" body=""
	I1009 19:06:26.931135  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:26.931533  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:26.931610  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:27.431142  161014 type.go:168] "Request Body" body=""
	I1009 19:06:27.431220  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:27.431622  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:27.931665  161014 type.go:168] "Request Body" body=""
	I1009 19:06:27.931758  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:27.932163  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:28.431861  161014 type.go:168] "Request Body" body=""
	I1009 19:06:28.431949  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:28.432310  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:28.931285  161014 type.go:168] "Request Body" body=""
	I1009 19:06:28.931362  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:28.931821  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:28.931892  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:29.431462  161014 type.go:168] "Request Body" body=""
	I1009 19:06:29.431547  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:29.432004  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:29.931701  161014 type.go:168] "Request Body" body=""
	I1009 19:06:29.931782  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:29.932180  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:30.431935  161014 type.go:168] "Request Body" body=""
	I1009 19:06:30.432026  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:30.432405  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:30.931026  161014 type.go:168] "Request Body" body=""
	I1009 19:06:30.931109  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:30.931522  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:30.970755  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:06:31.028107  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:31.028174  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:31.028309  161014 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:06:31.431764  161014 type.go:168] "Request Body" body=""
	I1009 19:06:31.431853  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:31.432208  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:31.432284  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:31.930867  161014 type.go:168] "Request Body" body=""
	I1009 19:06:31.930984  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:31.931336  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:32.430958  161014 type.go:168] "Request Body" body=""
	I1009 19:06:32.431047  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:32.431465  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:32.931031  161014 type.go:168] "Request Body" body=""
	I1009 19:06:32.931127  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:32.931496  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:33.431116  161014 type.go:168] "Request Body" body=""
	I1009 19:06:33.431195  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:33.431601  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:33.721082  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:06:33.781514  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:33.781597  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:33.781723  161014 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:06:33.784570  161014 out.go:179] * Enabled addons: 
	I1009 19:06:33.786444  161014 addons.go:514] duration metric: took 1m35.976729521s for enable addons: enabled=[]
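	The addon sequence above (kubectl apply fails because localhost:8441 refuses connections, retry.go schedules another attempt, and the addons are finally reported as not enabled) follows a plain apply-with-retry pattern. A minimal sketch under assumed names, backoff values, and manifest path follows; it is illustrative only, not minikube's retry.go or addons.go.

	// Hypothetical sketch of the apply-with-retry cycle seen in this log: run
	// "kubectl apply --force -f <manifest>", and if it fails (e.g. while the
	// apiserver is unreachable), wait and try again until a deadline expires.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// applyManifest shells out to kubectl, mirroring the apply commands in the log.
	func applyManifest(kubeconfig, manifest string) error {
		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
		}
		return nil
	}

	// applyWithRetry retries until the deadline, doubling the wait between attempts.
	func applyWithRetry(kubeconfig, manifest string, deadline time.Duration) error {
		var lastErr error
		wait := 5 * time.Second // assumed starting backoff, for illustration
		for start := time.Now(); time.Since(start) < deadline; {
			if lastErr = applyManifest(kubeconfig, manifest); lastErr == nil {
				return nil
			}
			fmt.Printf("apply failed, will retry after %s: %v\n", wait, lastErr)
			time.Sleep(wait)
			wait *= 2
		}
		return fmt.Errorf("giving up on %s: %v", manifest, lastErr)
	}

	func main() {
		if err := applyWithRetry("/var/lib/minikube/kubeconfig",
			"/etc/kubernetes/addons/storage-provisioner.yaml", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}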
	I1009 19:06:33.931193  161014 type.go:168] "Request Body" body=""
	I1009 19:06:33.931298  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:33.931708  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:33.931785  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:34.431608  161014 type.go:168] "Request Body" body=""
	I1009 19:06:34.431688  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:34.432050  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:34.931894  161014 type.go:168] "Request Body" body=""
	I1009 19:06:34.931982  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:34.932369  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:35.431177  161014 type.go:168] "Request Body" body=""
	I1009 19:06:35.431261  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:35.431656  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:35.931508  161014 type.go:168] "Request Body" body=""
	I1009 19:06:35.931632  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:35.932017  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:35.932080  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:36.431933  161014 type.go:168] "Request Body" body=""
	I1009 19:06:36.432042  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:36.432446  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:36.931225  161014 type.go:168] "Request Body" body=""
	I1009 19:06:36.931328  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:36.931704  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:37.431625  161014 type.go:168] "Request Body" body=""
	I1009 19:06:37.431738  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:37.432141  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:37.930857  161014 type.go:168] "Request Body" body=""
	I1009 19:06:37.930995  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:37.931342  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:38.431133  161014 type.go:168] "Request Body" body=""
	I1009 19:06:38.431214  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:38.431597  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:38.431683  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:38.931462  161014 type.go:168] "Request Body" body=""
	I1009 19:06:38.931563  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:38.931971  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:39.431871  161014 type.go:168] "Request Body" body=""
	I1009 19:06:39.431963  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:39.432315  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:39.931128  161014 type.go:168] "Request Body" body=""
	I1009 19:06:39.931220  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:39.931618  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:40.431437  161014 type.go:168] "Request Body" body=""
	I1009 19:06:40.431514  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:40.431890  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:40.431961  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:40.931810  161014 type.go:168] "Request Body" body=""
	I1009 19:06:40.931912  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:40.932290  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:41.431100  161014 type.go:168] "Request Body" body=""
	I1009 19:06:41.431218  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:41.431599  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:41.931346  161014 type.go:168] "Request Body" body=""
	I1009 19:06:41.931468  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:41.931837  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:42.431757  161014 type.go:168] "Request Body" body=""
	I1009 19:06:42.431845  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:42.432237  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:42.432298  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:42.931019  161014 type.go:168] "Request Body" body=""
	I1009 19:06:42.931113  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:42.931521  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:43.431303  161014 type.go:168] "Request Body" body=""
	I1009 19:06:43.431415  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:43.431782  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:43.931780  161014 type.go:168] "Request Body" body=""
	I1009 19:06:43.931864  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:43.932272  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:44.431107  161014 type.go:168] "Request Body" body=""
	I1009 19:06:44.431212  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:44.431609  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:44.931522  161014 type.go:168] "Request Body" body=""
	I1009 19:06:44.931612  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:44.932005  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:44.932091  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:45.430863  161014 type.go:168] "Request Body" body=""
	I1009 19:06:45.430955  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:45.431333  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:45.931195  161014 type.go:168] "Request Body" body=""
	I1009 19:06:45.931296  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:45.931727  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:46.431598  161014 type.go:168] "Request Body" body=""
	I1009 19:06:46.431682  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:46.432089  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:46.930909  161014 type.go:168] "Request Body" body=""
	I1009 19:06:46.931014  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:46.931410  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:47.431166  161014 type.go:168] "Request Body" body=""
	I1009 19:06:47.431244  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:47.431610  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:47.431679  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:47.931409  161014 type.go:168] "Request Body" body=""
	I1009 19:06:47.931495  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:47.931852  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:48.431707  161014 type.go:168] "Request Body" body=""
	I1009 19:06:48.431803  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:48.432224  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:48.931113  161014 type.go:168] "Request Body" body=""
	I1009 19:06:48.931196  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:48.931590  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:49.431438  161014 type.go:168] "Request Body" body=""
	I1009 19:06:49.431532  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:49.431933  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:49.432014  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:49.931847  161014 type.go:168] "Request Body" body=""
	I1009 19:06:49.931955  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:49.932365  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:50.431151  161014 type.go:168] "Request Body" body=""
	I1009 19:06:50.431252  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:50.431731  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:50.931584  161014 type.go:168] "Request Body" body=""
	I1009 19:06:50.931668  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:50.932034  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:51.431892  161014 type.go:168] "Request Body" body=""
	I1009 19:06:51.432001  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:51.432357  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:51.432451  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:51.931169  161014 type.go:168] "Request Body" body=""
	I1009 19:06:51.931251  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:51.931649  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:52.431585  161014 type.go:168] "Request Body" body=""
	I1009 19:06:52.431683  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:52.432058  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:52.931903  161014 type.go:168] "Request Body" body=""
	I1009 19:06:52.931994  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:52.932365  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:53.431140  161014 type.go:168] "Request Body" body=""
	I1009 19:06:53.431240  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:53.431607  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:53.931515  161014 type.go:168] "Request Body" body=""
	I1009 19:06:53.931602  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:53.931970  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:53.932045  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:54.431874  161014 type.go:168] "Request Body" body=""
	I1009 19:06:54.431956  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:54.432333  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:54.931110  161014 type.go:168] "Request Body" body=""
	I1009 19:06:54.931191  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:54.931572  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:55.431313  161014 type.go:168] "Request Body" body=""
	I1009 19:06:55.431422  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:55.431759  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:55.931628  161014 type.go:168] "Request Body" body=""
	I1009 19:06:55.931708  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:55.932052  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:55.932122  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:56.430861  161014 type.go:168] "Request Body" body=""
	I1009 19:06:56.430953  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:56.431299  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:56.931073  161014 type.go:168] "Request Body" body=""
	I1009 19:06:56.931162  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:56.931537  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:57.431318  161014 type.go:168] "Request Body" body=""
	I1009 19:06:57.431417  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:57.431759  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:57.931618  161014 type.go:168] "Request Body" body=""
	I1009 19:06:57.931839  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:57.932218  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:57.932279  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:58.431144  161014 type.go:168] "Request Body" body=""
	I1009 19:06:58.431334  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:58.431970  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:58.931861  161014 type.go:168] "Request Body" body=""
	I1009 19:06:58.931940  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:58.932311  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:59.431143  161014 type.go:168] "Request Body" body=""
	I1009 19:06:59.431223  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:59.431592  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:59.930909  161014 type.go:168] "Request Body" body=""
	I1009 19:06:59.931020  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:59.931371  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:00.430999  161014 type.go:168] "Request Body" body=""
	I1009 19:07:00.431081  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:00.431500  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:00.431566  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:00.931093  161014 type.go:168] "Request Body" body=""
	I1009 19:07:00.931180  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:00.931581  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:01.431360  161014 type.go:168] "Request Body" body=""
	I1009 19:07:01.431464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:01.431832  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:01.931692  161014 type.go:168] "Request Body" body=""
	I1009 19:07:01.931784  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:01.932184  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:02.430934  161014 type.go:168] "Request Body" body=""
	I1009 19:07:02.431018  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:02.431378  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:02.931191  161014 type.go:168] "Request Body" body=""
	I1009 19:07:02.931275  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:02.931689  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:02.931756  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:03.431523  161014 type.go:168] "Request Body" body=""
	I1009 19:07:03.431604  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:03.431991  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:03.930871  161014 type.go:168] "Request Body" body=""
	I1009 19:07:03.930969  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:03.931407  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:04.431200  161014 type.go:168] "Request Body" body=""
	I1009 19:07:04.431281  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:04.431686  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:04.931603  161014 type.go:168] "Request Body" body=""
	I1009 19:07:04.931684  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:04.932085  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:04.932154  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:05.430888  161014 type.go:168] "Request Body" body=""
	I1009 19:07:05.430980  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:05.431365  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:05.931176  161014 type.go:168] "Request Body" body=""
	I1009 19:07:05.931266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:05.931718  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:06.431607  161014 type.go:168] "Request Body" body=""
	I1009 19:07:06.431688  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:06.432075  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:06.930900  161014 type.go:168] "Request Body" body=""
	I1009 19:07:06.931004  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:06.931420  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:07.431211  161014 type.go:168] "Request Body" body=""
	I1009 19:07:07.431297  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:07.431674  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:07.431738  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:07.931521  161014 type.go:168] "Request Body" body=""
	I1009 19:07:07.931600  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:07.931988  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:08.431938  161014 type.go:168] "Request Body" body=""
	I1009 19:07:08.432023  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:08.432368  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:08.931198  161014 type.go:168] "Request Body" body=""
	I1009 19:07:08.931276  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:08.931670  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:09.431634  161014 type.go:168] "Request Body" body=""
	I1009 19:07:09.431764  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:09.432193  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:09.432271  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:09.931021  161014 type.go:168] "Request Body" body=""
	I1009 19:07:09.931112  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:09.931511  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:10.431319  161014 type.go:168] "Request Body" body=""
	I1009 19:07:10.431421  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:10.431776  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:10.931586  161014 type.go:168] "Request Body" body=""
	I1009 19:07:10.931675  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:10.932028  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:11.431928  161014 type.go:168] "Request Body" body=""
	I1009 19:07:11.432018  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:11.432409  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:11.432493  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:11.931228  161014 type.go:168] "Request Body" body=""
	I1009 19:07:11.931314  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:11.931691  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:12.431493  161014 type.go:168] "Request Body" body=""
	I1009 19:07:12.431576  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:12.431970  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:12.931830  161014 type.go:168] "Request Body" body=""
	I1009 19:07:12.931910  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:12.932268  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:13.431040  161014 type.go:168] "Request Body" body=""
	I1009 19:07:13.431128  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:13.431515  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:13.931313  161014 type.go:168] "Request Body" body=""
	I1009 19:07:13.931411  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:13.931829  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:13.931895  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:14.431732  161014 type.go:168] "Request Body" body=""
	I1009 19:07:14.431830  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:14.432198  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:14.931016  161014 type.go:168] "Request Body" body=""
	I1009 19:07:14.931107  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:14.931472  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:15.431233  161014 type.go:168] "Request Body" body=""
	I1009 19:07:15.431326  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:15.431754  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:15.931605  161014 type.go:168] "Request Body" body=""
	I1009 19:07:15.931691  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:15.932042  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:15.932112  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:16.430847  161014 type.go:168] "Request Body" body=""
	I1009 19:07:16.430926  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:16.431288  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:16.931059  161014 type.go:168] "Request Body" body=""
	I1009 19:07:16.931135  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:16.931483  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:17.431236  161014 type.go:168] "Request Body" body=""
	I1009 19:07:17.431328  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:17.431725  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:17.931584  161014 type.go:168] "Request Body" body=""
	I1009 19:07:17.931680  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:17.932068  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:17.932144  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:18.430878  161014 type.go:168] "Request Body" body=""
	I1009 19:07:18.430959  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:18.431336  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:18.931220  161014 type.go:168] "Request Body" body=""
	I1009 19:07:18.931308  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:18.931716  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:19.431622  161014 type.go:168] "Request Body" body=""
	I1009 19:07:19.431711  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:19.432084  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:19.930887  161014 type.go:168] "Request Body" body=""
	I1009 19:07:19.930970  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:19.931335  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:20.431128  161014 type.go:168] "Request Body" body=""
	I1009 19:07:20.431228  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:20.431607  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:20.431677  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:20.931571  161014 type.go:168] "Request Body" body=""
	I1009 19:07:20.931652  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:20.932025  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:21.431914  161014 type.go:168] "Request Body" body=""
	I1009 19:07:21.432004  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:21.432437  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:21.931260  161014 type.go:168] "Request Body" body=""
	I1009 19:07:21.931349  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:21.931776  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:22.431637  161014 type.go:168] "Request Body" body=""
	I1009 19:07:22.431729  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:22.432091  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:22.432158  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:22.930926  161014 type.go:168] "Request Body" body=""
	I1009 19:07:22.931021  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:22.931412  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:23.431182  161014 type.go:168] "Request Body" body=""
	I1009 19:07:23.431266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:23.431631  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:23.931458  161014 type.go:168] "Request Body" body=""
	I1009 19:07:23.931550  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:23.931920  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:24.431853  161014 type.go:168] "Request Body" body=""
	I1009 19:07:24.431948  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:24.432326  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:24.432422  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:24.931143  161014 type.go:168] "Request Body" body=""
	I1009 19:07:24.931223  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:24.931599  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:25.431358  161014 type.go:168] "Request Body" body=""
	I1009 19:07:25.431464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:25.431821  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:25.931703  161014 type.go:168] "Request Body" body=""
	I1009 19:07:25.931787  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:25.932180  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:26.430976  161014 type.go:168] "Request Body" body=""
	I1009 19:07:26.431075  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:26.431458  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:26.931245  161014 type.go:168] "Request Body" body=""
	I1009 19:07:26.931331  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:26.931713  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:26.931784  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:27.431576  161014 type.go:168] "Request Body" body=""
	I1009 19:07:27.431668  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:27.432031  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:27.931772  161014 type.go:168] "Request Body" body=""
	I1009 19:07:27.931862  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:27.932254  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:28.431022  161014 type.go:168] "Request Body" body=""
	I1009 19:07:28.431102  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:28.431482  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:28.931348  161014 type.go:168] "Request Body" body=""
	I1009 19:07:28.931459  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:28.931844  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:28.931911  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:29.431781  161014 type.go:168] "Request Body" body=""
	I1009 19:07:29.431865  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:29.432226  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:29.931019  161014 type.go:168] "Request Body" body=""
	I1009 19:07:29.931099  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:29.931495  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:30.431235  161014 type.go:168] "Request Body" body=""
	I1009 19:07:30.431318  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:30.431699  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:30.931618  161014 type.go:168] "Request Body" body=""
	I1009 19:07:30.931726  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:30.932096  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:30.932155  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:31.430950  161014 type.go:168] "Request Body" body=""
	I1009 19:07:31.431039  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:31.431429  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:31.931226  161014 type.go:168] "Request Body" body=""
	I1009 19:07:31.931324  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:31.931743  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:32.431688  161014 type.go:168] "Request Body" body=""
	I1009 19:07:32.431781  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:32.432184  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:32.930987  161014 type.go:168] "Request Body" body=""
	I1009 19:07:32.931070  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:32.931473  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:33.431239  161014 type.go:168] "Request Body" body=""
	I1009 19:07:33.431321  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:33.431722  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:33.431792  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:33.931516  161014 type.go:168] "Request Body" body=""
	I1009 19:07:33.931606  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:33.931963  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:34.431848  161014 type.go:168] "Request Body" body=""
	I1009 19:07:34.431929  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:34.432337  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:34.931149  161014 type.go:168] "Request Body" body=""
	I1009 19:07:34.931233  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:34.931610  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:35.431433  161014 type.go:168] "Request Body" body=""
	I1009 19:07:35.431519  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:35.431884  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:35.431951  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:35.931757  161014 type.go:168] "Request Body" body=""
	I1009 19:07:35.931834  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:35.932194  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:36.431002  161014 type.go:168] "Request Body" body=""
	I1009 19:07:36.431092  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:36.431521  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:36.931304  161014 type.go:168] "Request Body" body=""
	I1009 19:07:36.931415  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:36.931771  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:37.431635  161014 type.go:168] "Request Body" body=""
	I1009 19:07:37.431735  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:37.432135  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:37.432203  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:37.931637  161014 type.go:168] "Request Body" body=""
	I1009 19:07:37.931755  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:37.932124  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:38.430922  161014 type.go:168] "Request Body" body=""
	I1009 19:07:38.431020  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:38.431405  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:38.931217  161014 type.go:168] "Request Body" body=""
	I1009 19:07:38.931295  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:38.931651  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:39.431495  161014 type.go:168] "Request Body" body=""
	I1009 19:07:39.431575  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:39.431936  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:39.931858  161014 type.go:168] "Request Body" body=""
	I1009 19:07:39.931940  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:39.932326  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:39.932421  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:40.431161  161014 type.go:168] "Request Body" body=""
	I1009 19:07:40.431255  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:40.431615  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:40.931366  161014 type.go:168] "Request Body" body=""
	I1009 19:07:40.931491  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:40.931869  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:41.431767  161014 type.go:168] "Request Body" body=""
	I1009 19:07:41.431861  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:41.432350  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:41.931160  161014 type.go:168] "Request Body" body=""
	I1009 19:07:41.931250  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:41.931735  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:42.431633  161014 type.go:168] "Request Body" body=""
	I1009 19:07:42.431732  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:42.432111  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:42.432176  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:42.930929  161014 type.go:168] "Request Body" body=""
	I1009 19:07:42.931031  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:42.931442  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:43.431234  161014 type.go:168] "Request Body" body=""
	I1009 19:07:43.431336  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:43.431722  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:43.931601  161014 type.go:168] "Request Body" body=""
	I1009 19:07:43.931683  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:43.932053  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:44.430865  161014 type.go:168] "Request Body" body=""
	I1009 19:07:44.430947  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:44.431356  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:44.931167  161014 type.go:168] "Request Body" body=""
	I1009 19:07:44.931250  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:44.931627  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:44.931696  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:45.431431  161014 type.go:168] "Request Body" body=""
	I1009 19:07:45.431510  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:45.431847  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:45.931770  161014 type.go:168] "Request Body" body=""
	I1009 19:07:45.931853  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:45.932210  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:46.430939  161014 type.go:168] "Request Body" body=""
	I1009 19:07:46.431018  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:46.431347  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:46.931133  161014 type.go:168] "Request Body" body=""
	I1009 19:07:46.931213  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:46.931599  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:47.431337  161014 type.go:168] "Request Body" body=""
	I1009 19:07:47.431449  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:47.431806  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:47.431876  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:47.931598  161014 type.go:168] "Request Body" body=""
	I1009 19:07:47.931682  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:47.932028  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:48.431835  161014 type.go:168] "Request Body" body=""
	I1009 19:07:48.431919  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:48.432273  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:48.931089  161014 type.go:168] "Request Body" body=""
	I1009 19:07:48.931179  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:48.931527  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:49.431272  161014 type.go:168] "Request Body" body=""
	I1009 19:07:49.431350  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:49.431715  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:49.931579  161014 type.go:168] "Request Body" body=""
	I1009 19:07:49.931664  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:49.932040  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:49.932107  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:50.431582  161014 type.go:168] "Request Body" body=""
	I1009 19:07:50.431662  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:50.432003  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:50.931872  161014 type.go:168] "Request Body" body=""
	I1009 19:07:50.931951  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:50.932322  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:51.431016  161014 type.go:168] "Request Body" body=""
	I1009 19:07:51.431095  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:51.431478  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:51.931270  161014 type.go:168] "Request Body" body=""
	I1009 19:07:51.931349  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:51.931734  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:52.431662  161014 type.go:168] "Request Body" body=""
	I1009 19:07:52.431743  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:52.432165  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:52.432255  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:52.931027  161014 type.go:168] "Request Body" body=""
	I1009 19:07:52.931111  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:52.931524  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:53.431299  161014 type.go:168] "Request Body" body=""
	I1009 19:07:53.431409  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:53.431777  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:53.931692  161014 type.go:168] "Request Body" body=""
	I1009 19:07:53.931802  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:53.932188  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:54.431033  161014 type.go:168] "Request Body" body=""
	I1009 19:07:54.431116  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:54.431506  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:54.931262  161014 type.go:168] "Request Body" body=""
	I1009 19:07:54.931371  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:54.931818  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:54.931896  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:55.431748  161014 type.go:168] "Request Body" body=""
	I1009 19:07:55.431839  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:55.432227  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:55.931001  161014 type.go:168] "Request Body" body=""
	I1009 19:07:55.931091  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:55.931464  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:56.431257  161014 type.go:168] "Request Body" body=""
	I1009 19:07:56.431342  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:56.431727  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:56.931602  161014 type.go:168] "Request Body" body=""
	I1009 19:07:56.931701  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:56.932081  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:56.932152  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:57.430910  161014 type.go:168] "Request Body" body=""
	I1009 19:07:57.431002  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:57.431362  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:57.931308  161014 type.go:168] "Request Body" body=""
	I1009 19:07:57.931413  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:57.931773  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:58.431643  161014 type.go:168] "Request Body" body=""
	I1009 19:07:58.431802  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:58.432134  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:58.931081  161014 type.go:168] "Request Body" body=""
	I1009 19:07:58.931169  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:58.931540  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:59.431310  161014 type.go:168] "Request Body" body=""
	I1009 19:07:59.431416  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:59.431835  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:59.431910  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:59.931738  161014 type.go:168] "Request Body" body=""
	I1009 19:07:59.931826  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:59.932198  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:00.430977  161014 type.go:168] "Request Body" body=""
	I1009 19:08:00.431073  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:00.431459  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:00.931240  161014 type.go:168] "Request Body" body=""
	I1009 19:08:00.931327  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:00.931726  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:01.431608  161014 type.go:168] "Request Body" body=""
	I1009 19:08:01.431703  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:01.432081  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:01.432155  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:01.930901  161014 type.go:168] "Request Body" body=""
	I1009 19:08:01.930998  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:01.931353  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:02.431155  161014 type.go:168] "Request Body" body=""
	I1009 19:08:02.431246  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:02.431683  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:02.931507  161014 type.go:168] "Request Body" body=""
	I1009 19:08:02.931648  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:02.932004  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:03.431604  161014 type.go:168] "Request Body" body=""
	I1009 19:08:03.431682  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:03.432043  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:03.930851  161014 type.go:168] "Request Body" body=""
	I1009 19:08:03.930932  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:03.931328  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:03.931434  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:04.431148  161014 type.go:168] "Request Body" body=""
	I1009 19:08:04.431226  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:04.431671  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:04.931497  161014 type.go:168] "Request Body" body=""
	I1009 19:08:04.931576  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:04.931933  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:05.431818  161014 type.go:168] "Request Body" body=""
	I1009 19:08:05.431913  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:05.432337  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:05.931111  161014 type.go:168] "Request Body" body=""
	I1009 19:08:05.931188  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:05.931598  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:05.931665  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:06.431433  161014 type.go:168] "Request Body" body=""
	I1009 19:08:06.431518  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:06.431897  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:06.931739  161014 type.go:168] "Request Body" body=""
	I1009 19:08:06.931825  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:06.932190  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:07.431010  161014 type.go:168] "Request Body" body=""
	I1009 19:08:07.431098  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:07.431492  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:07.931321  161014 type.go:168] "Request Body" body=""
	I1009 19:08:07.931478  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:07.931847  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:07.931911  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:08.431736  161014 type.go:168] "Request Body" body=""
	I1009 19:08:08.431826  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:08.432199  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:08.931147  161014 type.go:168] "Request Body" body=""
	I1009 19:08:08.931256  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:08.931581  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:09.431348  161014 type.go:168] "Request Body" body=""
	I1009 19:08:09.431501  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:09.431847  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:09.931761  161014 type.go:168] "Request Body" body=""
	I1009 19:08:09.931868  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:09.932264  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:09.932358  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:10.431111  161014 type.go:168] "Request Body" body=""
	I1009 19:08:10.431226  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:10.431600  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:10.931402  161014 type.go:168] "Request Body" body=""
	I1009 19:08:10.931502  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:10.931871  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:11.431784  161014 type.go:168] "Request Body" body=""
	I1009 19:08:11.431872  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:11.432233  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:11.931048  161014 type.go:168] "Request Body" body=""
	I1009 19:08:11.931144  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:11.931576  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:12.431421  161014 type.go:168] "Request Body" body=""
	I1009 19:08:12.431503  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:12.431862  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:12.431928  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:12.931757  161014 type.go:168] "Request Body" body=""
	I1009 19:08:12.931854  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:12.932305  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:13.431097  161014 type.go:168] "Request Body" body=""
	I1009 19:08:13.431185  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:13.431628  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:13.931448  161014 type.go:168] "Request Body" body=""
	I1009 19:08:13.931544  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:13.931895  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:14.431813  161014 type.go:168] "Request Body" body=""
	I1009 19:08:14.431896  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:14.432325  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:14.432452  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:14.931193  161014 type.go:168] "Request Body" body=""
	I1009 19:08:14.931304  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:14.931724  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:15.431610  161014 type.go:168] "Request Body" body=""
	I1009 19:08:15.431784  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:15.432189  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:15.930996  161014 type.go:168] "Request Body" body=""
	I1009 19:08:15.931076  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:15.931476  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:16.431279  161014 type.go:168] "Request Body" body=""
	I1009 19:08:16.431364  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:16.431823  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:16.931708  161014 type.go:168] "Request Body" body=""
	I1009 19:08:16.931791  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:16.932165  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:16.932241  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:17.430990  161014 type.go:168] "Request Body" body=""
	I1009 19:08:17.431074  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:17.431506  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:17.931431  161014 type.go:168] "Request Body" body=""
	I1009 19:08:17.931525  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:17.931892  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:18.431806  161014 type.go:168] "Request Body" body=""
	I1009 19:08:18.431897  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:18.432299  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:18.931120  161014 type.go:168] "Request Body" body=""
	I1009 19:08:18.931214  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:18.931614  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:19.431514  161014 type.go:168] "Request Body" body=""
	I1009 19:08:19.431606  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:19.432047  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:19.432124  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:19.931598  161014 type.go:168] "Request Body" body=""
	I1009 19:08:19.931691  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:19.932042  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:20.431891  161014 type.go:168] "Request Body" body=""
	I1009 19:08:20.431971  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:20.432405  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:20.931180  161014 type.go:168] "Request Body" body=""
	I1009 19:08:20.931263  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:20.931621  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:21.431543  161014 type.go:168] "Request Body" body=""
	I1009 19:08:21.431622  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:21.432021  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:21.931880  161014 type.go:168] "Request Body" body=""
	I1009 19:08:21.931973  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:21.932344  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:21.932455  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:22.431220  161014 type.go:168] "Request Body" body=""
	I1009 19:08:22.431312  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:22.431735  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:22.931611  161014 type.go:168] "Request Body" body=""
	I1009 19:08:22.931692  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:22.932047  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:23.430844  161014 type.go:168] "Request Body" body=""
	I1009 19:08:23.430928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:23.431339  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:23.931177  161014 type.go:168] "Request Body" body=""
	I1009 19:08:23.931280  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:23.931703  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:24.431533  161014 type.go:168] "Request Body" body=""
	I1009 19:08:24.431623  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:24.432029  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:24.432099  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:24.930846  161014 type.go:168] "Request Body" body=""
	I1009 19:08:24.930940  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:24.931301  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:25.431093  161014 type.go:168] "Request Body" body=""
	I1009 19:08:25.431180  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:25.431586  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:25.931364  161014 type.go:168] "Request Body" body=""
	I1009 19:08:25.931490  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:25.931848  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:26.431757  161014 type.go:168] "Request Body" body=""
	I1009 19:08:26.431844  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:26.432286  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:26.432356  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:26.931111  161014 type.go:168] "Request Body" body=""
	I1009 19:08:26.931219  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:26.931654  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:27.431562  161014 type.go:168] "Request Body" body=""
	I1009 19:08:27.431657  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:27.432104  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:27.931917  161014 type.go:168] "Request Body" body=""
	I1009 19:08:27.932031  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:27.932479  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:28.431253  161014 type.go:168] "Request Body" body=""
	I1009 19:08:28.431334  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:28.431741  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:28.931701  161014 type.go:168] "Request Body" body=""
	I1009 19:08:28.931793  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:28.932147  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:28.932231  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:29.430994  161014 type.go:168] "Request Body" body=""
	I1009 19:08:29.431076  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:29.431507  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:29.931284  161014 type.go:168] "Request Body" body=""
	I1009 19:08:29.931372  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:29.931786  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:30.431725  161014 type.go:168] "Request Body" body=""
	I1009 19:08:30.431807  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:30.432196  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:30.930995  161014 type.go:168] "Request Body" body=""
	I1009 19:08:30.931086  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:30.931489  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:31.431293  161014 type.go:168] "Request Body" body=""
	I1009 19:08:31.431407  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:31.431802  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:31.431899  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:31.931763  161014 type.go:168] "Request Body" body=""
	I1009 19:08:31.931847  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:31.932233  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:32.431064  161014 type.go:168] "Request Body" body=""
	I1009 19:08:32.431143  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:32.431569  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:32.931367  161014 type.go:168] "Request Body" body=""
	I1009 19:08:32.931475  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:32.931834  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:33.431666  161014 type.go:168] "Request Body" body=""
	I1009 19:08:33.431746  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:33.432152  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:33.432228  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:33.931085  161014 type.go:168] "Request Body" body=""
	I1009 19:08:33.931187  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:33.931603  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:34.431399  161014 type.go:168] "Request Body" body=""
	I1009 19:08:34.431485  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:34.431891  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:34.931782  161014 type.go:168] "Request Body" body=""
	I1009 19:08:34.931877  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:34.932244  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:35.431033  161014 type.go:168] "Request Body" body=""
	I1009 19:08:35.431120  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:35.431472  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:35.931247  161014 type.go:168] "Request Body" body=""
	I1009 19:08:35.931336  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:35.931759  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:35.931829  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:36.431682  161014 type.go:168] "Request Body" body=""
	I1009 19:08:36.431785  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:36.432193  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:36.931013  161014 type.go:168] "Request Body" body=""
	I1009 19:08:36.931099  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:36.931470  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:37.431265  161014 type.go:168] "Request Body" body=""
	I1009 19:08:37.431370  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:37.431819  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:37.931612  161014 type.go:168] "Request Body" body=""
	I1009 19:08:37.931700  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:37.932079  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:37.932145  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:38.430913  161014 type.go:168] "Request Body" body=""
	I1009 19:08:38.431022  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:38.431519  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:38.931233  161014 type.go:168] "Request Body" body=""
	I1009 19:08:38.931319  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:38.931686  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:39.431521  161014 type.go:168] "Request Body" body=""
	I1009 19:08:39.431627  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:39.432049  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:39.931904  161014 type.go:168] "Request Body" body=""
	I1009 19:08:39.932008  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:39.932353  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:39.932451  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:40.431183  161014 type.go:168] "Request Body" body=""
	I1009 19:08:40.431282  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:40.431716  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:40.931624  161014 type.go:168] "Request Body" body=""
	I1009 19:08:40.931713  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:40.932079  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:41.430889  161014 type.go:168] "Request Body" body=""
	I1009 19:08:41.430987  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:41.431423  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:41.931240  161014 type.go:168] "Request Body" body=""
	I1009 19:08:41.931324  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:41.931700  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:42.431534  161014 type.go:168] "Request Body" body=""
	I1009 19:08:42.431639  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:42.432064  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:42.432142  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:42.930885  161014 type.go:168] "Request Body" body=""
	I1009 19:08:42.930975  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:42.931354  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:43.431227  161014 type.go:168] "Request Body" body=""
	I1009 19:08:43.431323  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:43.431715  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:43.931552  161014 type.go:168] "Request Body" body=""
	I1009 19:08:43.931632  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:43.931992  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:44.431828  161014 type.go:168] "Request Body" body=""
	I1009 19:08:44.431924  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:44.432325  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:44.432415  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:44.931136  161014 type.go:168] "Request Body" body=""
	I1009 19:08:44.931245  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:44.931664  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:45.431554  161014 type.go:168] "Request Body" body=""
	I1009 19:08:45.431649  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:45.432042  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:45.931929  161014 type.go:168] "Request Body" body=""
	I1009 19:08:45.932032  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:45.932456  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:46.431215  161014 type.go:168] "Request Body" body=""
	I1009 19:08:46.431303  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:46.431675  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:46.931516  161014 type.go:168] "Request Body" body=""
	I1009 19:08:46.931612  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:46.932033  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:46.932105  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:47.431930  161014 type.go:168] "Request Body" body=""
	I1009 19:08:47.432024  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:47.432404  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:47.931253  161014 type.go:168] "Request Body" body=""
	I1009 19:08:47.931351  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:47.931772  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:48.431679  161014 type.go:168] "Request Body" body=""
	I1009 19:08:48.431767  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:48.432147  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:48.930986  161014 type.go:168] "Request Body" body=""
	I1009 19:08:48.931073  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:48.931466  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:49.431246  161014 type.go:168] "Request Body" body=""
	I1009 19:08:49.431332  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:49.431709  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:49.431791  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:49.931583  161014 type.go:168] "Request Body" body=""
	I1009 19:08:49.931665  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:49.932043  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:50.430854  161014 type.go:168] "Request Body" body=""
	I1009 19:08:50.430942  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:50.431310  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:50.931059  161014 type.go:168] "Request Body" body=""
	I1009 19:08:50.931138  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:50.931534  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:51.431317  161014 type.go:168] "Request Body" body=""
	I1009 19:08:51.431423  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:51.431783  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:51.431860  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:51.931678  161014 type.go:168] "Request Body" body=""
	I1009 19:08:51.931770  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:51.932161  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:52.430940  161014 type.go:168] "Request Body" body=""
	I1009 19:08:52.431043  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:52.431471  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:52.931234  161014 type.go:168] "Request Body" body=""
	I1009 19:08:52.931317  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:52.931697  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:53.431539  161014 type.go:168] "Request Body" body=""
	I1009 19:08:53.431626  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:53.432040  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:53.432105  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:53.931898  161014 type.go:168] "Request Body" body=""
	I1009 19:08:53.931980  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:53.932334  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:54.431123  161014 type.go:168] "Request Body" body=""
	I1009 19:08:54.431206  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:54.431572  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:54.931007  161014 type.go:168] "Request Body" body=""
	I1009 19:08:54.931094  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:54.931473  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:55.431255  161014 type.go:168] "Request Body" body=""
	I1009 19:08:55.431334  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:55.431719  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:55.931595  161014 type.go:168] "Request Body" body=""
	I1009 19:08:55.931684  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:55.932059  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:55.932132  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:56.430905  161014 type.go:168] "Request Body" body=""
	I1009 19:08:56.430996  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:56.431358  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:56.931139  161014 type.go:168] "Request Body" body=""
	I1009 19:08:56.931225  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:56.931614  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:57.431422  161014 type.go:168] "Request Body" body=""
	I1009 19:08:57.431520  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:57.431890  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:57.931717  161014 type.go:168] "Request Body" body=""
	I1009 19:08:57.931804  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:57.932243  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:57.932309  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:58.431442  161014 type.go:168] "Request Body" body=""
	I1009 19:08:58.431719  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:58.432305  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:58.931643  161014 type.go:168] "Request Body" body=""
	I1009 19:08:58.931736  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:58.932089  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:59.431793  161014 type.go:168] "Request Body" body=""
	I1009 19:08:59.431868  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:59.432216  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:59.931889  161014 type.go:168] "Request Body" body=""
	I1009 19:08:59.931982  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:59.932322  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:59.932430  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:00.430938  161014 type.go:168] "Request Body" body=""
	I1009 19:09:00.431025  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:00.431413  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:00.930953  161014 type.go:168] "Request Body" body=""
	I1009 19:09:00.931042  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:00.931443  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:01.431021  161014 type.go:168] "Request Body" body=""
	I1009 19:09:01.431114  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:01.431513  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:01.931074  161014 type.go:168] "Request Body" body=""
	I1009 19:09:01.931154  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:01.931545  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:02.431340  161014 type.go:168] "Request Body" body=""
	I1009 19:09:02.431449  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:02.431830  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:02.431902  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:02.931823  161014 type.go:168] "Request Body" body=""
	I1009 19:09:02.931913  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:02.932314  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:03.431114  161014 type.go:168] "Request Body" body=""
	I1009 19:09:03.431193  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:03.431578  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:03.931464  161014 type.go:168] "Request Body" body=""
	I1009 19:09:03.931552  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:03.931986  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:04.431831  161014 type.go:168] "Request Body" body=""
	I1009 19:09:04.431934  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:04.432314  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:04.432398  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:04.931129  161014 type.go:168] "Request Body" body=""
	I1009 19:09:04.931216  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:04.931674  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:05.431533  161014 type.go:168] "Request Body" body=""
	I1009 19:09:05.431611  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:05.432021  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:05.931854  161014 type.go:168] "Request Body" body=""
	I1009 19:09:05.931940  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:05.932334  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:06.431088  161014 type.go:168] "Request Body" body=""
	I1009 19:09:06.431167  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:06.431515  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:06.931278  161014 type.go:168] "Request Body" body=""
	I1009 19:09:06.931362  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:06.931748  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:06.931816  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:07.431644  161014 type.go:168] "Request Body" body=""
	I1009 19:09:07.431747  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:07.432178  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:07.931866  161014 type.go:168] "Request Body" body=""
	I1009 19:09:07.931950  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:07.932290  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:08.431090  161014 type.go:168] "Request Body" body=""
	I1009 19:09:08.431172  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:08.431650  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:08.931429  161014 type.go:168] "Request Body" body=""
	I1009 19:09:08.931507  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:08.931843  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:08.931909  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:09.431805  161014 type.go:168] "Request Body" body=""
	I1009 19:09:09.431897  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:09.432328  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:09.931113  161014 type.go:168] "Request Body" body=""
	I1009 19:09:09.931194  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:09.931569  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:10.431340  161014 type.go:168] "Request Body" body=""
	I1009 19:09:10.431473  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:10.431864  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:10.931696  161014 type.go:168] "Request Body" body=""
	I1009 19:09:10.931778  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:10.932062  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:10.932116  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:11.430855  161014 type.go:168] "Request Body" body=""
	I1009 19:09:11.430938  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:11.431371  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:11.931153  161014 type.go:168] "Request Body" body=""
	I1009 19:09:11.931230  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:11.931601  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:12.431453  161014 type.go:168] "Request Body" body=""
	I1009 19:09:12.431539  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:12.431968  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:12.931803  161014 type.go:168] "Request Body" body=""
	I1009 19:09:12.931890  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:12.932230  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:12.932299  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:13.431049  161014 type.go:168] "Request Body" body=""
	I1009 19:09:13.431141  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:13.431581  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:13.931422  161014 type.go:168] "Request Body" body=""
	I1009 19:09:13.931504  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:13.931849  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:14.431710  161014 type.go:168] "Request Body" body=""
	I1009 19:09:14.431803  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:14.432200  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:14.930978  161014 type.go:168] "Request Body" body=""
	I1009 19:09:14.931058  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:14.931421  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:15.431205  161014 type.go:168] "Request Body" body=""
	I1009 19:09:15.431290  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:15.431792  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:15.431868  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:15.931738  161014 type.go:168] "Request Body" body=""
	I1009 19:09:15.931822  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:15.932171  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:16.430949  161014 type.go:168] "Request Body" body=""
	I1009 19:09:16.431033  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:16.431370  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:16.931168  161014 type.go:168] "Request Body" body=""
	I1009 19:09:16.931244  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:16.931635  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:17.431446  161014 type.go:168] "Request Body" body=""
	I1009 19:09:17.431534  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:17.431909  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:17.431982  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:17.931495  161014 type.go:168] "Request Body" body=""
	I1009 19:09:17.931580  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:17.931927  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:18.431744  161014 type.go:168] "Request Body" body=""
	I1009 19:09:18.431828  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:18.432200  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:18.931151  161014 type.go:168] "Request Body" body=""
	I1009 19:09:18.931250  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:18.931652  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:19.431441  161014 type.go:168] "Request Body" body=""
	I1009 19:09:19.431529  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:19.431984  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:19.432070  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:19.931848  161014 type.go:168] "Request Body" body=""
	I1009 19:09:19.931941  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:19.932309  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:20.431088  161014 type.go:168] "Request Body" body=""
	I1009 19:09:20.431164  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:20.431555  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:20.931352  161014 type.go:168] "Request Body" body=""
	I1009 19:09:20.931455  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:20.931826  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:21.431728  161014 type.go:168] "Request Body" body=""
	I1009 19:09:21.431814  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:21.432175  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:21.432242  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:21.930958  161014 type.go:168] "Request Body" body=""
	I1009 19:09:21.931034  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:21.931435  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:22.431185  161014 type.go:168] "Request Body" body=""
	I1009 19:09:22.431270  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:22.431704  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:22.931192  161014 type.go:168] "Request Body" body=""
	I1009 19:09:22.931273  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:22.931689  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:23.431502  161014 type.go:168] "Request Body" body=""
	I1009 19:09:23.431580  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:23.431996  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:23.930860  161014 type.go:168] "Request Body" body=""
	I1009 19:09:23.930955  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:23.931370  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:23.931478  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:24.431207  161014 type.go:168] "Request Body" body=""
	I1009 19:09:24.431286  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:24.431650  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:24.931519  161014 type.go:168] "Request Body" body=""
	I1009 19:09:24.931614  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:24.931998  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:25.431913  161014 type.go:168] "Request Body" body=""
	I1009 19:09:25.432001  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:25.432369  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:25.931194  161014 type.go:168] "Request Body" body=""
	I1009 19:09:25.931275  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:25.931711  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:25.931786  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:26.431609  161014 type.go:168] "Request Body" body=""
	I1009 19:09:26.431690  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:26.432050  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:26.931918  161014 type.go:168] "Request Body" body=""
	I1009 19:09:26.932020  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:26.932417  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:27.431187  161014 type.go:168] "Request Body" body=""
	I1009 19:09:27.431268  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:27.431666  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:27.931530  161014 type.go:168] "Request Body" body=""
	I1009 19:09:27.931614  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:27.931987  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:27.932055  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:28.431844  161014 type.go:168] "Request Body" body=""
	I1009 19:09:28.431933  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:28.432359  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:28.931165  161014 type.go:168] "Request Body" body=""
	I1009 19:09:28.931247  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:28.931680  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:29.431569  161014 type.go:168] "Request Body" body=""
	I1009 19:09:29.431650  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:29.432040  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:29.931942  161014 type.go:168] "Request Body" body=""
	I1009 19:09:29.932027  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:29.932374  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:29.932460  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:30.431194  161014 type.go:168] "Request Body" body=""
	I1009 19:09:30.431282  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:30.431737  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:30.931616  161014 type.go:168] "Request Body" body=""
	I1009 19:09:30.931725  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:30.932121  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:31.430987  161014 type.go:168] "Request Body" body=""
	I1009 19:09:31.431078  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:31.431478  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:31.931232  161014 type.go:168] "Request Body" body=""
	I1009 19:09:31.931315  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:31.931680  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:32.431533  161014 type.go:168] "Request Body" body=""
	I1009 19:09:32.431613  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:32.431992  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:32.432063  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:32.931853  161014 type.go:168] "Request Body" body=""
	I1009 19:09:32.931950  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:32.932297  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:33.431044  161014 type.go:168] "Request Body" body=""
	I1009 19:09:33.431132  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:33.431543  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:33.931355  161014 type.go:168] "Request Body" body=""
	I1009 19:09:33.931458  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:33.931818  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:34.431650  161014 type.go:168] "Request Body" body=""
	I1009 19:09:34.431733  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:34.432148  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:34.432213  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:34.930967  161014 type.go:168] "Request Body" body=""
	I1009 19:09:34.931063  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:34.931473  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:35.431283  161014 type.go:168] "Request Body" body=""
	I1009 19:09:35.431373  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:35.431779  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:35.931628  161014 type.go:168] "Request Body" body=""
	I1009 19:09:35.931736  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:35.932084  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:36.430910  161014 type.go:168] "Request Body" body=""
	I1009 19:09:36.431012  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:36.431444  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:36.931340  161014 type.go:168] "Request Body" body=""
	I1009 19:09:36.931451  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:36.931825  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:36.931893  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:37.431740  161014 type.go:168] "Request Body" body=""
	I1009 19:09:37.431822  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:37.432174  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:37.931117  161014 type.go:168] "Request Body" body=""
	I1009 19:09:37.931218  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:37.931587  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:38.431359  161014 type.go:168] "Request Body" body=""
	I1009 19:09:38.431467  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:38.431870  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:38.931821  161014 type.go:168] "Request Body" body=""
	I1009 19:09:38.931902  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:38.932265  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:38.932358  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:39.431099  161014 type.go:168] "Request Body" body=""
	I1009 19:09:39.431179  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:39.431570  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:39.931428  161014 type.go:168] "Request Body" body=""
	I1009 19:09:39.931517  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:39.931883  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:40.431747  161014 type.go:168] "Request Body" body=""
	I1009 19:09:40.431830  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:40.432201  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:40.931026  161014 type.go:168] "Request Body" body=""
	I1009 19:09:40.931135  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:40.931594  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:41.431370  161014 type.go:168] "Request Body" body=""
	I1009 19:09:41.431476  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:41.431863  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:41.431951  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:41.931795  161014 type.go:168] "Request Body" body=""
	I1009 19:09:41.931873  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:41.932227  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:42.431026  161014 type.go:168] "Request Body" body=""
	I1009 19:09:42.431112  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:42.431474  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:42.931233  161014 type.go:168] "Request Body" body=""
	I1009 19:09:42.931308  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:42.931720  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:43.431625  161014 type.go:168] "Request Body" body=""
	I1009 19:09:43.431708  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:43.432076  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:43.432152  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:43.930876  161014 type.go:168] "Request Body" body=""
	I1009 19:09:43.930965  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:43.931363  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:44.431159  161014 type.go:168] "Request Body" body=""
	I1009 19:09:44.431252  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:44.431660  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:44.931539  161014 type.go:168] "Request Body" body=""
	I1009 19:09:44.931619  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:44.932022  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:45.431848  161014 type.go:168] "Request Body" body=""
	I1009 19:09:45.431928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:45.432294  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:45.432362  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:45.931071  161014 type.go:168] "Request Body" body=""
	I1009 19:09:45.931154  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:45.931550  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:46.431330  161014 type.go:168] "Request Body" body=""
	I1009 19:09:46.431433  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:46.431785  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:46.931618  161014 type.go:168] "Request Body" body=""
	I1009 19:09:46.931717  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:46.932083  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:47.430878  161014 type.go:168] "Request Body" body=""
	I1009 19:09:47.430967  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:47.431308  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:47.931113  161014 type.go:168] "Request Body" body=""
	I1009 19:09:47.931193  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:47.931575  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:47.931645  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:48.431350  161014 type.go:168] "Request Body" body=""
	I1009 19:09:48.431448  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:48.431909  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:48.931846  161014 type.go:168] "Request Body" body=""
	I1009 19:09:48.931928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:48.932292  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:49.431050  161014 type.go:168] "Request Body" body=""
	I1009 19:09:49.431125  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:49.431508  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:49.931265  161014 type.go:168] "Request Body" body=""
	I1009 19:09:49.931345  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:49.931748  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:49.931814  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:50.431652  161014 type.go:168] "Request Body" body=""
	I1009 19:09:50.431747  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:50.432126  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:50.930878  161014 type.go:168] "Request Body" body=""
	I1009 19:09:50.930959  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:50.931336  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:51.431163  161014 type.go:168] "Request Body" body=""
	I1009 19:09:51.431258  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:51.431622  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:51.931358  161014 type.go:168] "Request Body" body=""
	I1009 19:09:51.931464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:51.931852  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:51.931924  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:52.431703  161014 type.go:168] "Request Body" body=""
	I1009 19:09:52.431795  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:52.432179  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:52.930954  161014 type.go:168] "Request Body" body=""
	I1009 19:09:52.931050  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:52.931459  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:53.431224  161014 type.go:168] "Request Body" body=""
	I1009 19:09:53.431365  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:53.431740  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:53.931748  161014 type.go:168] "Request Body" body=""
	I1009 19:09:53.931831  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:53.932191  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:53.932260  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:54.430975  161014 type.go:168] "Request Body" body=""
	I1009 19:09:54.431053  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:54.431476  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:54.931249  161014 type.go:168] "Request Body" body=""
	I1009 19:09:54.931341  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:54.931729  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:55.431607  161014 type.go:168] "Request Body" body=""
	I1009 19:09:55.431691  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:55.432110  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:55.930917  161014 type.go:168] "Request Body" body=""
	I1009 19:09:55.931003  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:55.931362  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:56.431145  161014 type.go:168] "Request Body" body=""
	I1009 19:09:56.431222  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:56.431639  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:56.431710  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:56.931556  161014 type.go:168] "Request Body" body=""
	I1009 19:09:56.931656  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:56.932062  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:57.431903  161014 type.go:168] "Request Body" body=""
	I1009 19:09:57.431989  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:57.432337  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:57.931402  161014 type.go:168] "Request Body" body=""
	I1009 19:09:57.931482  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:57.931843  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:58.431701  161014 type.go:168] "Request Body" body=""
	I1009 19:09:58.431790  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:58.432145  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:58.432218  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:58.931088  161014 type.go:168] "Request Body" body=""
	I1009 19:09:58.931175  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:58.931505  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:59.431298  161014 type.go:168] "Request Body" body=""
	I1009 19:09:59.431395  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:59.431751  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:59.931602  161014 type.go:168] "Request Body" body=""
	I1009 19:09:59.931702  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:59.932051  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:00.430856  161014 type.go:168] "Request Body" body=""
	I1009 19:10:00.430958  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:00.431337  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:00.931121  161014 type.go:168] "Request Body" body=""
	I1009 19:10:00.931220  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:00.931593  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:00.931674  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:01.431423  161014 type.go:168] "Request Body" body=""
	I1009 19:10:01.431509  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:01.431863  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:01.931614  161014 type.go:168] "Request Body" body=""
	I1009 19:10:01.931705  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:01.932079  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:02.430870  161014 type.go:168] "Request Body" body=""
	I1009 19:10:02.430952  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:02.431333  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:02.931135  161014 type.go:168] "Request Body" body=""
	I1009 19:10:02.931235  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:02.931629  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:02.931714  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:03.431520  161014 type.go:168] "Request Body" body=""
	I1009 19:10:03.431673  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:03.432032  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:03.930864  161014 type.go:168] "Request Body" body=""
	I1009 19:10:03.930947  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:03.931344  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:04.431204  161014 type.go:168] "Request Body" body=""
	I1009 19:10:04.431296  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:04.431704  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:04.931600  161014 type.go:168] "Request Body" body=""
	I1009 19:10:04.931678  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:04.932040  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:04.932106  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:05.430899  161014 type.go:168] "Request Body" body=""
	I1009 19:10:05.431003  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:05.431407  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:05.931195  161014 type.go:168] "Request Body" body=""
	I1009 19:10:05.931270  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:05.931635  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:06.431451  161014 type.go:168] "Request Body" body=""
	I1009 19:10:06.431534  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:06.431953  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:06.931837  161014 type.go:168] "Request Body" body=""
	I1009 19:10:06.931927  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:06.932279  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:06.932345  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:07.431076  161014 type.go:168] "Request Body" body=""
	I1009 19:10:07.431164  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:07.431506  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:07.931394  161014 type.go:168] "Request Body" body=""
	I1009 19:10:07.931475  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:07.931835  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:08.431660  161014 type.go:168] "Request Body" body=""
	I1009 19:10:08.431741  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:08.432102  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:08.930920  161014 type.go:168] "Request Body" body=""
	I1009 19:10:08.930998  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:08.931426  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:09.431179  161014 type.go:168] "Request Body" body=""
	I1009 19:10:09.431260  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:09.431640  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:09.431713  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:09.931551  161014 type.go:168] "Request Body" body=""
	I1009 19:10:09.931636  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:09.932085  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:10.430911  161014 type.go:168] "Request Body" body=""
	I1009 19:10:10.431004  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:10.431408  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:10.931180  161014 type.go:168] "Request Body" body=""
	I1009 19:10:10.931260  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:10.931651  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:11.431533  161014 type.go:168] "Request Body" body=""
	I1009 19:10:11.431610  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:11.432017  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:11.432093  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:11.930846  161014 type.go:168] "Request Body" body=""
	I1009 19:10:11.930928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:11.931300  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:12.431099  161014 type.go:168] "Request Body" body=""
	I1009 19:10:12.431188  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:12.431615  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:12.931577  161014 type.go:168] "Request Body" body=""
	I1009 19:10:12.931661  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:12.932029  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:13.431910  161014 type.go:168] "Request Body" body=""
	I1009 19:10:13.431995  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:13.432350  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:13.432438  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:13.931217  161014 type.go:168] "Request Body" body=""
	I1009 19:10:13.931302  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:13.931678  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:14.431548  161014 type.go:168] "Request Body" body=""
	I1009 19:10:14.431638  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:14.432110  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:14.930876  161014 type.go:168] "Request Body" body=""
	I1009 19:10:14.930963  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:14.931343  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:15.431156  161014 type.go:168] "Request Body" body=""
	I1009 19:10:15.431244  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:15.431618  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:15.931358  161014 type.go:168] "Request Body" body=""
	I1009 19:10:15.931451  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:15.931817  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:15.931883  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:16.431696  161014 type.go:168] "Request Body" body=""
	I1009 19:10:16.431794  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:16.432158  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:16.930930  161014 type.go:168] "Request Body" body=""
	I1009 19:10:16.931010  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:16.931370  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:17.431200  161014 type.go:168] "Request Body" body=""
	I1009 19:10:17.431290  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:17.431663  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:17.931525  161014 type.go:168] "Request Body" body=""
	I1009 19:10:17.931613  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:17.932012  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:17.932077  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:18.431980  161014 type.go:168] "Request Body" body=""
	I1009 19:10:18.432065  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:18.432498  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:18.931327  161014 type.go:168] "Request Body" body=""
	I1009 19:10:18.931435  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:18.931798  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:19.431649  161014 type.go:168] "Request Body" body=""
	I1009 19:10:19.431736  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:19.432133  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:19.930941  161014 type.go:168] "Request Body" body=""
	I1009 19:10:19.931034  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:19.931426  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:20.431191  161014 type.go:168] "Request Body" body=""
	I1009 19:10:20.431277  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:20.431702  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:20.431786  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:20.931649  161014 type.go:168] "Request Body" body=""
	I1009 19:10:20.931743  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:20.932145  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:21.430998  161014 type.go:168] "Request Body" body=""
	I1009 19:10:21.431093  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:21.431500  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:21.931294  161014 type.go:168] "Request Body" body=""
	I1009 19:10:21.931404  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:21.931769  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:22.431592  161014 type.go:168] "Request Body" body=""
	I1009 19:10:22.431689  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:22.432061  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:22.432138  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:22.930890  161014 type.go:168] "Request Body" body=""
	I1009 19:10:22.930981  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:22.931355  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:23.431123  161014 type.go:168] "Request Body" body=""
	I1009 19:10:23.431202  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:23.431562  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:23.931393  161014 type.go:168] "Request Body" body=""
	I1009 19:10:23.931475  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:23.931849  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:24.431681  161014 type.go:168] "Request Body" body=""
	I1009 19:10:24.431765  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:24.432120  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:24.432200  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:24.930948  161014 type.go:168] "Request Body" body=""
	I1009 19:10:24.931038  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:24.931411  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:25.431172  161014 type.go:168] "Request Body" body=""
	I1009 19:10:25.431263  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:25.431645  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:25.931517  161014 type.go:168] "Request Body" body=""
	I1009 19:10:25.931604  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:25.931950  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:26.431795  161014 type.go:168] "Request Body" body=""
	I1009 19:10:26.431877  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:26.432259  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:26.432327  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:26.931108  161014 type.go:168] "Request Body" body=""
	I1009 19:10:26.931192  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:26.931561  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:27.431372  161014 type.go:168] "Request Body" body=""
	I1009 19:10:27.431478  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:27.431852  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:27.931767  161014 type.go:168] "Request Body" body=""
	I1009 19:10:27.931844  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:27.932243  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:28.431036  161014 type.go:168] "Request Body" body=""
	I1009 19:10:28.431114  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:28.431500  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:28.931317  161014 type.go:168] "Request Body" body=""
	I1009 19:10:28.931415  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:28.931802  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:28.931870  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:29.431682  161014 type.go:168] "Request Body" body=""
	I1009 19:10:29.431764  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:29.432158  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:29.930948  161014 type.go:168] "Request Body" body=""
	I1009 19:10:29.931029  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:29.931432  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:30.431237  161014 type.go:168] "Request Body" body=""
	I1009 19:10:30.431318  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:30.431700  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:30.931592  161014 type.go:168] "Request Body" body=""
	I1009 19:10:30.931686  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:30.932066  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:30.932138  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:31.430865  161014 type.go:168] "Request Body" body=""
	I1009 19:10:31.430944  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:31.431326  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:31.931100  161014 type.go:168] "Request Body" body=""
	I1009 19:10:31.931183  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:31.931557  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:32.431408  161014 type.go:168] "Request Body" body=""
	I1009 19:10:32.431492  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:32.431860  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:32.931727  161014 type.go:168] "Request Body" body=""
	I1009 19:10:32.931827  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:32.932201  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:32.932275  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:33.431035  161014 type.go:168] "Request Body" body=""
	I1009 19:10:33.431127  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:33.431509  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:33.931347  161014 type.go:168] "Request Body" body=""
	I1009 19:10:33.931452  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:33.931805  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:34.431659  161014 type.go:168] "Request Body" body=""
	I1009 19:10:34.431767  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:34.432157  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:34.930935  161014 type.go:168] "Request Body" body=""
	I1009 19:10:34.931032  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:34.931422  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:35.431188  161014 type.go:168] "Request Body" body=""
	I1009 19:10:35.431266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:35.431638  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:35.431700  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:35.931496  161014 type.go:168] "Request Body" body=""
	I1009 19:10:35.931583  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:35.931982  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:36.431843  161014 type.go:168] "Request Body" body=""
	I1009 19:10:36.431930  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:36.432287  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:36.931012  161014 type.go:168] "Request Body" body=""
	I1009 19:10:36.931101  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:36.931479  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:37.431254  161014 type.go:168] "Request Body" body=""
	I1009 19:10:37.431336  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:37.431708  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:37.431785  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:37.931498  161014 type.go:168] "Request Body" body=""
	I1009 19:10:37.931578  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:37.931952  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:38.431802  161014 type.go:168] "Request Body" body=""
	I1009 19:10:38.431878  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:38.432242  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:38.931094  161014 type.go:168] "Request Body" body=""
	I1009 19:10:38.931171  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:38.931535  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:39.431342  161014 type.go:168] "Request Body" body=""
	I1009 19:10:39.431467  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:39.431828  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:39.431894  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:39.931678  161014 type.go:168] "Request Body" body=""
	I1009 19:10:39.931769  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:39.932114  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:40.430894  161014 type.go:168] "Request Body" body=""
	I1009 19:10:40.431002  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:40.431338  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:40.931086  161014 type.go:168] "Request Body" body=""
	I1009 19:10:40.931169  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:40.931549  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:41.431354  161014 type.go:168] "Request Body" body=""
	I1009 19:10:41.431484  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:41.431936  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:41.432009  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:41.931856  161014 type.go:168] "Request Body" body=""
	I1009 19:10:41.931944  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:41.932342  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:42.431239  161014 type.go:168] "Request Body" body=""
	I1009 19:10:42.431343  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:42.431776  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:42.931642  161014 type.go:168] "Request Body" body=""
	I1009 19:10:42.931724  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:42.932139  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:43.430955  161014 type.go:168] "Request Body" body=""
	I1009 19:10:43.431055  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:43.431482  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:43.931286  161014 type.go:168] "Request Body" body=""
	I1009 19:10:43.931364  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:43.931761  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:43.931841  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:44.431651  161014 type.go:168] "Request Body" body=""
	I1009 19:10:44.431739  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:44.432136  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:44.930918  161014 type.go:168] "Request Body" body=""
	I1009 19:10:44.930997  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:44.931368  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:45.431210  161014 type.go:168] "Request Body" body=""
	I1009 19:10:45.431301  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:45.431803  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:45.931785  161014 type.go:168] "Request Body" body=""
	I1009 19:10:45.931879  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:45.932234  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:45.932298  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:46.431044  161014 type.go:168] "Request Body" body=""
	I1009 19:10:46.431130  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:46.431509  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:46.931298  161014 type.go:168] "Request Body" body=""
	I1009 19:10:46.931409  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:46.931768  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:47.431684  161014 type.go:168] "Request Body" body=""
	I1009 19:10:47.431772  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:47.432192  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:47.930892  161014 type.go:168] "Request Body" body=""
	I1009 19:10:47.931082  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:47.931491  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:48.431254  161014 type.go:168] "Request Body" body=""
	I1009 19:10:48.431334  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:48.431754  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:48.431817  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:48.931519  161014 type.go:168] "Request Body" body=""
	I1009 19:10:48.931605  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:48.931963  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:49.431903  161014 type.go:168] "Request Body" body=""
	I1009 19:10:49.431995  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:49.432442  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:49.931216  161014 type.go:168] "Request Body" body=""
	I1009 19:10:49.931299  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:49.931685  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:50.431513  161014 type.go:168] "Request Body" body=""
	I1009 19:10:50.431600  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:50.432015  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:50.432094  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:50.931903  161014 type.go:168] "Request Body" body=""
	I1009 19:10:50.931985  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:50.932356  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:51.431151  161014 type.go:168] "Request Body" body=""
	I1009 19:10:51.431235  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:51.431691  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:51.931607  161014 type.go:168] "Request Body" body=""
	I1009 19:10:51.931704  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:51.932066  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:52.430855  161014 type.go:168] "Request Body" body=""
	I1009 19:10:52.430936  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:52.431352  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:52.931144  161014 type.go:168] "Request Body" body=""
	I1009 19:10:52.931236  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:52.931629  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:52.931694  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:53.431504  161014 type.go:168] "Request Body" body=""
	I1009 19:10:53.431592  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:53.431978  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:53.930879  161014 type.go:168] "Request Body" body=""
	I1009 19:10:53.930990  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:53.931420  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:54.431176  161014 type.go:168] "Request Body" body=""
	I1009 19:10:54.431256  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:54.431696  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:54.931517  161014 type.go:168] "Request Body" body=""
	I1009 19:10:54.931600  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:54.932006  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:54.932070  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:55.431919  161014 type.go:168] "Request Body" body=""
	I1009 19:10:55.432013  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:55.432499  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:55.931252  161014 type.go:168] "Request Body" body=""
	I1009 19:10:55.931340  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:55.931770  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:56.431601  161014 type.go:168] "Request Body" body=""
	I1009 19:10:56.431705  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:56.432063  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:56.930857  161014 type.go:168] "Request Body" body=""
	I1009 19:10:56.930944  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:56.931308  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:57.431063  161014 type.go:168] "Request Body" body=""
	I1009 19:10:57.431152  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:57.431557  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:57.431627  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:57.931435  161014 type.go:168] "Request Body" body=""
	I1009 19:10:57.931520  161014 node_ready.go:38] duration metric: took 6m0.000788191s for node "functional-158523" to be "Ready" ...
	I1009 19:10:57.934316  161014 out.go:203] 
	W1009 19:10:57.935818  161014 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:10:57.935834  161014 out.go:285] * 
	W1009 19:10:57.937485  161014 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:10:57.938875  161014 out.go:203] 
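	For context, the loop above is minikube's node Ready probe: every 500ms it repeats the GET shown in the request lines against https://192.168.49.2:8441/api/v1/nodes/functional-158523 and gives up after 6m0s. A minimal way to rerun the same check by hand (illustrative only, using the endpoint and kube context taken from this log; here both are expected to fail with the same connection-refused error):

	    # unauthenticated probe of the apiserver port only; /version is readable anonymously when the apiserver is up
	    curl -k https://192.168.49.2:8441/version
	    # the actual Ready condition, once the apiserver answers
	    kubectl --context functional-158523 get node functional-158523 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'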
	
	
	==> CRI-O <==
	Oct 09 19:10:51 functional-158523 crio[2962]: time="2025-10-09T19:10:51.642256781Z" level=info msg="createCtr: removing container 2490de6b39f748d402af3495e43fc05576eccbab98ebd5bbfdff943d4e40f275" id=da0b9f6c-a2b3-428e-9d4b-fa205d5f27f7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:51 functional-158523 crio[2962]: time="2025-10-09T19:10:51.642289335Z" level=info msg="createCtr: deleting container 2490de6b39f748d402af3495e43fc05576eccbab98ebd5bbfdff943d4e40f275 from storage" id=da0b9f6c-a2b3-428e-9d4b-fa205d5f27f7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:51 functional-158523 crio[2962]: time="2025-10-09T19:10:51.644687569Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-158523_kube-system_08a2248bbc397b9ed927890e2073120b_0" id=da0b9f6c-a2b3-428e-9d4b-fa205d5f27f7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.619676195Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=c07a27ea-6a5f-460c-a647-b32add76a687 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.620620769Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=2f7e3b10-f004-4065-9292-3fe1f8b15f45 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.621576231Z" level=info msg="Creating container: kube-system/etcd-functional-158523/etcd" id=d3bb353b-5939-49f6-9490-9518c0763165 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.621810647Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.624939514Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.625350062Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.640320483Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d3bb353b-5939-49f6-9490-9518c0763165 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.641756335Z" level=info msg="createCtr: deleting container ID b74958fde2f0fcf56a952a9f8f9e70895129cc6d7951fe9eb8b5202c0a41081b from idIndex" id=d3bb353b-5939-49f6-9490-9518c0763165 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.641796746Z" level=info msg="createCtr: removing container b74958fde2f0fcf56a952a9f8f9e70895129cc6d7951fe9eb8b5202c0a41081b" id=d3bb353b-5939-49f6-9490-9518c0763165 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.641836923Z" level=info msg="createCtr: deleting container b74958fde2f0fcf56a952a9f8f9e70895129cc6d7951fe9eb8b5202c0a41081b from storage" id=d3bb353b-5939-49f6-9490-9518c0763165 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:53 functional-158523 crio[2962]: time="2025-10-09T19:10:53.644007781Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-158523_kube-system_8f4f9df5924bbaa4e1ec7f60e6576647_0" id=d3bb353b-5939-49f6-9490-9518c0763165 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.61882509Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=9e80cc50-ec3a-4aab-92b0-554cd819b949 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.619862164Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b0596507-769f-4dd5-9ef6-c22f860962cd name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.620798167Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-158523/kube-apiserver" id=6a2a7b62-51c2-4e09-bb0d-30b3eebccef2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.621043823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.624221756Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.624620837Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.639591129Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=6a2a7b62-51c2-4e09-bb0d-30b3eebccef2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.64106485Z" level=info msg="createCtr: deleting container ID 598cca6b8fe371ebc8cdb0f104a2eef6aa85302a4515ee5a3d9d0ad354a967f4 from idIndex" id=6a2a7b62-51c2-4e09-bb0d-30b3eebccef2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.641107944Z" level=info msg="createCtr: removing container 598cca6b8fe371ebc8cdb0f104a2eef6aa85302a4515ee5a3d9d0ad354a967f4" id=6a2a7b62-51c2-4e09-bb0d-30b3eebccef2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.641141296Z" level=info msg="createCtr: deleting container 598cca6b8fe371ebc8cdb0f104a2eef6aa85302a4515ee5a3d9d0ad354a967f4 from storage" id=6a2a7b62-51c2-4e09-bb0d-30b3eebccef2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:10:56 functional-158523 crio[2962]: time="2025-10-09T19:10:56.643279067Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-158523_kube-system_bbd906eec6f9b7c1a1a340fc9a9fdcd1_0" id=6a2a7b62-51c2-4e09-bb0d-30b3eebccef2 name=/runtime.v1.RuntimeService/CreateContainer
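	The repeated "cannot open sd-bus: No such file or directory" errors above are what blocks every control-plane container (etcd, kube-apiserver, kube-controller-manager) from being created. That message normally comes from the OCI runtime trying to place the container into a systemd-managed cgroup without a reachable systemd D-Bus socket; whether that is the root cause here is an assumption, not something this log proves. An illustrative way to check it from inside the node:

	    # which cgroup manager CRI-O is configured with, and whether systemd/dbus are healthy
	    out/minikube-linux-amd64 -p functional-158523 ssh -- sudo grep -Rn cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/
	    out/minikube-linux-amd64 -p functional-158523 ssh -- sudo systemctl status dbus crio

	If the manager turns out to be "systemd", switching it to "cgroupfs" (with conmon_cgroup = "pod") and restarting crio would be one way to test that hypothesis.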
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:11:01.960721    4531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:11:01.961283    4531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:11:01.962809    4531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:11:01.963285    4531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:11:01.964903    4531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:11:01 up 53 min,  0 user,  load average: 0.00, 0.13, 9.35
	Linux functional-158523 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:10:52 functional-158523 kubelet[1810]: I1009 19:10:52.516174    1810 kubelet_node_status.go:75] "Attempting to register node" node="functional-158523"
	Oct 09 19:10:52 functional-158523 kubelet[1810]: E1009 19:10:52.516602    1810 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-158523"
	Oct 09 19:10:53 functional-158523 kubelet[1810]: E1009 19:10:53.619138    1810 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:10:53 functional-158523 kubelet[1810]: E1009 19:10:53.644430    1810 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:10:53 functional-158523 kubelet[1810]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:10:53 functional-158523 kubelet[1810]:  > podSandboxID="c5f59cf39316c74dd65d2925d309cbd6e6fdc48c022b61803b3c6d8d973e588c"
	Oct 09 19:10:53 functional-158523 kubelet[1810]: E1009 19:10:53.644560    1810 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:10:53 functional-158523 kubelet[1810]:         container etcd start failed in pod etcd-functional-158523_kube-system(8f4f9df5924bbaa4e1ec7f60e6576647): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:10:53 functional-158523 kubelet[1810]:  > logger="UnhandledError"
	Oct 09 19:10:53 functional-158523 kubelet[1810]: E1009 19:10:53.644607    1810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-158523" podUID="8f4f9df5924bbaa4e1ec7f60e6576647"
	Oct 09 19:10:56 functional-158523 kubelet[1810]: E1009 19:10:56.592982    1810 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-158523.186ce7d3e1d25377\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-158523.186ce7d3e1d25377  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-158523,UID:functional-158523,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-158523 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-158523,},FirstTimestamp:2025-10-09 19:00:51.607794551 +0000 UTC m=+0.591054211,LastTimestamp:2025-10-09 19:00:51.609818572 +0000 UTC m=+0.593078239,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-158523,}"
	Oct 09 19:10:56 functional-158523 kubelet[1810]: E1009 19:10:56.618286    1810 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:10:56 functional-158523 kubelet[1810]: E1009 19:10:56.643583    1810 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:10:56 functional-158523 kubelet[1810]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:10:56 functional-158523 kubelet[1810]:  > podSandboxID="e6a4bc1b2df9d751888af8288e7c4c569afb0335567fe2f74c173dbe4e47f513"
	Oct 09 19:10:56 functional-158523 kubelet[1810]: E1009 19:10:56.643725    1810 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:10:56 functional-158523 kubelet[1810]:         container kube-apiserver start failed in pod kube-apiserver-functional-158523_kube-system(bbd906eec6f9b7c1a1a340fc9a9fdcd1): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:10:56 functional-158523 kubelet[1810]:  > logger="UnhandledError"
	Oct 09 19:10:56 functional-158523 kubelet[1810]: E1009 19:10:56.643761    1810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-158523" podUID="bbd906eec6f9b7c1a1a340fc9a9fdcd1"
	Oct 09 19:10:59 functional-158523 kubelet[1810]: E1009 19:10:59.309466    1810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-158523?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 19:10:59 functional-158523 kubelet[1810]: E1009 19:10:59.406184    1810 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 09 19:10:59 functional-158523 kubelet[1810]: I1009 19:10:59.517992    1810 kubelet_node_status.go:75] "Attempting to register node" node="functional-158523"
	Oct 09 19:10:59 functional-158523 kubelet[1810]: E1009 19:10:59.518407    1810 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-158523"
	Oct 09 19:11:01 functional-158523 kubelet[1810]: E1009 19:11:01.028531    1810 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 09 19:11:01 functional-158523 kubelet[1810]: E1009 19:11:01.656889    1810 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-158523\" not found"
	

                                                
                                                
-- /stdout --
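The kubelet log above tells the same story from the node's side: every attempt to start etcd and kube-apiserver fails with the sd-bus CreateContainerError, so the node never registers and nothing ever listens on 8441, which is consistent with the empty container status table. A hedged way to confirm that directly (assuming the docker-driver node container is still running, as the docker inspect output further below indicates):

    out/minikube-linux-amd64 -p functional-158523 ssh -- sudo crictl ps -a
    out/minikube-linux-amd64 -p functional-158523 ssh -- sudo ss -tlnp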
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523: exit status 2 (313.344643ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-158523" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (2.16s)
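The --format={{.APIServer}} flag the post-mortem helper uses above is a Go template over minikube's status fields, so other components can be queried the same way. Illustrative only; the field names assumed here are the ones minikube's default status view prints (host, kubelet, apiserver, kubeconfig):

    out/minikube-linux-amd64 status -p functional-158523 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'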

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (2.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 kubectl -- --context functional-158523 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158523 kubectl -- --context functional-158523 get pods: exit status 1 (101.716477ms)

                                                
                                                
** stderr ** 
	E1009 19:11:09.887512  166494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:11:09.887820  166494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:11:09.888989  166494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:11:09.889257  166494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:11:09.890600  166494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-amd64 -p functional-158523 kubectl -- --context functional-158523 get pods": exit status 1
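For reference, minikube kubectl -- invokes a kubectl binary matched to the cluster's Kubernetes version and passes everything after the double dash straight through, so the failure above is the same connection-refused error a plain kubectl would report against this apiserver. An illustrative way to exercise the passthrough without touching the (down) apiserver:

    out/minikube-linux-amd64 -p functional-158523 kubectl -- version --client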
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-158523
helpers_test.go:243: (dbg) docker inspect functional-158523:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	        "Created": "2025-10-09T18:56:39.519997973Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 156103,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:56:39.555178961Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hosts",
	        "LogPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4-json.log",
	        "Name": "/functional-158523",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-158523:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-158523",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	                "LowerDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-158523",
	                "Source": "/var/lib/docker/volumes/functional-158523/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-158523",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-158523",
	                "name.minikube.sigs.k8s.io": "functional-158523",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "50f61b8c99f766763e9e30d37584e537ddf10f74215a382a4221cbe7f7c2e821",
	            "SandboxKey": "/var/run/docker/netns/50f61b8c99f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-158523": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:a1:34:24:78:d1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6347eb7b6b2842e266f51d3873c8aa169a4287ea52510c3d62dc4ed41012963c",
	                    "EndpointID": "eb1ada23cbc058d52037dd94574627dd1b0fedf455f291e656f050bf06881952",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-158523",
	                        "dea3735c567c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
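	(Editor's note, not part of the captured log: the inspect output above shows container port 8441/tcp published on the host at 127.0.0.1:32781, alongside the 192.168.49.2 address inside the "functional-158523" network. The sketch below, assumed rather than taken from the harness, resolves that mapping with the same inspect template style the minikube logs further down use for 22/tcp.)

	// port_lookup.go - hedged sketch: looks up the host port Docker published for the
	// container's 8441/tcp. For the container inspected above this would print 32781.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-158523").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println("8441/tcp is published on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}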
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523: exit status 2 (297.922568ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
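	(Editor's note, not part of the captured log: the post-mortem above runs `minikube status --format={{.Host}}` and gets "Running" together with a non-zero exit, which only confirms the host container is up, not that the Kubernetes components are healthy. The sketch below is an assumed helper, not code from helpers_test.go, showing how to capture both the output and the exit code of that same command.)

	// status_check.go - hedged sketch: run the status command from the post-mortem and
	// report its output alongside the process exit code.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "functional-158523", "-n", "functional-158523")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output: %s\n", out)
		if err != nil {
			// exec.ExitError carries the status the harness logs as "exit status 2"
			if ee, ok := err.(*exec.ExitError); ok {
				fmt.Printf("exit code: %d\n", ee.ExitCode())
			}
		}
	}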
helpers_test.go:252: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 logs -n 25
helpers_test.go:260: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-656427 --log_dir /tmp/nospam-656427 pause                                                              │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ unpause │ nospam-656427 --log_dir /tmp/nospam-656427 unpause                                                            │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ unpause │ nospam-656427 --log_dir /tmp/nospam-656427 unpause                                                            │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ unpause │ nospam-656427 --log_dir /tmp/nospam-656427 unpause                                                            │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ nospam-656427 --log_dir /tmp/nospam-656427 stop                                                               │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ nospam-656427 --log_dir /tmp/nospam-656427 stop                                                               │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ nospam-656427 --log_dir /tmp/nospam-656427 stop                                                               │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ delete  │ -p nospam-656427                                                                                              │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ start   │ -p functional-158523 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ start   │ -p functional-158523 --alsologtostderr -v=8                                                                   │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:04 UTC │                     │
	│ cache   │ functional-158523 cache add registry.k8s.io/pause:3.1                                                         │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ functional-158523 cache add registry.k8s.io/pause:3.3                                                         │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ functional-158523 cache add registry.k8s.io/pause:latest                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ functional-158523 cache add minikube-local-cache-test:functional-158523                                       │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ functional-158523 cache delete minikube-local-cache-test:functional-158523                                    │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-158523 ssh sudo crictl images                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-158523 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-158523 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │                     │
	│ cache   │ functional-158523 cache reload                                                                                │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-158523 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ kubectl │ functional-158523 kubectl -- --context functional-158523 get pods                                             │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:04:53
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:04:53.859600  161014 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:04:53.859894  161014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:53.859904  161014 out.go:374] Setting ErrFile to fd 2...
	I1009 19:04:53.859909  161014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:53.860103  161014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:04:53.860622  161014 out.go:368] Setting JSON to false
	I1009 19:04:53.861569  161014 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2843,"bootTime":1760033851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:04:53.861680  161014 start.go:143] virtualization: kvm guest
	I1009 19:04:53.864538  161014 out.go:179] * [functional-158523] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:04:53.866020  161014 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:04:53.866041  161014 notify.go:221] Checking for updates...
	I1009 19:04:53.868520  161014 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:04:53.869799  161014 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:53.871001  161014 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:04:53.872350  161014 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:04:53.873695  161014 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:04:53.875515  161014 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:53.875628  161014 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:04:53.899122  161014 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:04:53.899239  161014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:04:53.961702  161014 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:04:53.950772825 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:04:53.961810  161014 docker.go:319] overlay module found
	I1009 19:04:53.963901  161014 out.go:179] * Using the docker driver based on existing profile
	I1009 19:04:53.965359  161014 start.go:309] selected driver: docker
	I1009 19:04:53.965397  161014 start.go:930] validating driver "docker" against &{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:04:53.965505  161014 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:04:53.965601  161014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:04:54.024534  161014 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:04:54.014787007 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:04:54.025138  161014 cni.go:84] Creating CNI manager for ""
	I1009 19:04:54.025189  161014 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:04:54.025246  161014 start.go:353] cluster config:
	{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:04:54.027519  161014 out.go:179] * Starting "functional-158523" primary control-plane node in "functional-158523" cluster
	I1009 19:04:54.028967  161014 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:04:54.030473  161014 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:04:54.031821  161014 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:04:54.031876  161014 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:04:54.031885  161014 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:04:54.031986  161014 cache.go:58] Caching tarball of preloaded images
	I1009 19:04:54.032085  161014 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:04:54.032098  161014 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:04:54.032213  161014 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/config.json ...
	I1009 19:04:54.053026  161014 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:04:54.053045  161014 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:04:54.053063  161014 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:04:54.053096  161014 start.go:361] acquireMachinesLock for functional-158523: {Name:mk995713bbd40419f859c4a8640c8ada0479020c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:04:54.053186  161014 start.go:365] duration metric: took 46.429µs to acquireMachinesLock for "functional-158523"
	I1009 19:04:54.053209  161014 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:04:54.053220  161014 fix.go:55] fixHost starting: 
	I1009 19:04:54.053511  161014 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:04:54.070674  161014 fix.go:113] recreateIfNeeded on functional-158523: state=Running err=<nil>
	W1009 19:04:54.070714  161014 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:04:54.072611  161014 out.go:252] * Updating the running docker "functional-158523" container ...
	I1009 19:04:54.072644  161014 machine.go:93] provisionDockerMachine start ...
	I1009 19:04:54.072732  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:54.089158  161014 main.go:141] libmachine: Using SSH client type: native
	I1009 19:04:54.089398  161014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:04:54.089417  161014 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:04:54.234516  161014 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-158523
	
	I1009 19:04:54.234543  161014 ubuntu.go:182] provisioning hostname "functional-158523"
	I1009 19:04:54.234606  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:54.252690  161014 main.go:141] libmachine: Using SSH client type: native
	I1009 19:04:54.252942  161014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:04:54.252960  161014 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-158523 && echo "functional-158523" | sudo tee /etc/hostname
	I1009 19:04:54.409130  161014 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-158523
	
	I1009 19:04:54.409240  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:54.428592  161014 main.go:141] libmachine: Using SSH client type: native
	I1009 19:04:54.428819  161014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:04:54.428839  161014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-158523' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-158523/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-158523' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:04:54.575221  161014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:04:54.575248  161014 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:04:54.575298  161014 ubuntu.go:190] setting up certificates
	I1009 19:04:54.575313  161014 provision.go:84] configureAuth start
	I1009 19:04:54.575366  161014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-158523
	I1009 19:04:54.593157  161014 provision.go:143] copyHostCerts
	I1009 19:04:54.593200  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:04:54.593229  161014 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:04:54.593244  161014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:04:54.593315  161014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:04:54.593491  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:04:54.593517  161014 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:04:54.593524  161014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:04:54.593557  161014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:04:54.593615  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:04:54.593632  161014 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:04:54.593638  161014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:04:54.593693  161014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:04:54.593752  161014 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.functional-158523 san=[127.0.0.1 192.168.49.2 functional-158523 localhost minikube]
	I1009 19:04:54.998231  161014 provision.go:177] copyRemoteCerts
	I1009 19:04:54.998297  161014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:04:54.998335  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.016505  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.120020  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:04:55.120077  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 19:04:55.138116  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:04:55.138187  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:04:55.157031  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:04:55.157100  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:04:55.176045  161014 provision.go:87] duration metric: took 600.715143ms to configureAuth
	I1009 19:04:55.176080  161014 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:04:55.176245  161014 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:55.176357  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.194450  161014 main.go:141] libmachine: Using SSH client type: native
	I1009 19:04:55.194679  161014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:04:55.194701  161014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:04:55.467764  161014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:04:55.467789  161014 machine.go:96] duration metric: took 1.395134259s to provisionDockerMachine
	I1009 19:04:55.467804  161014 start.go:294] postStartSetup for "functional-158523" (driver="docker")
	I1009 19:04:55.467821  161014 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:04:55.467882  161014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:04:55.467922  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.486353  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.591117  161014 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:04:55.594855  161014 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1009 19:04:55.594886  161014 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1009 19:04:55.594893  161014 command_runner.go:130] > VERSION_ID="12"
	I1009 19:04:55.594900  161014 command_runner.go:130] > VERSION="12 (bookworm)"
	I1009 19:04:55.594907  161014 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1009 19:04:55.594911  161014 command_runner.go:130] > ID=debian
	I1009 19:04:55.594915  161014 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1009 19:04:55.594920  161014 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1009 19:04:55.594926  161014 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1009 19:04:55.594992  161014 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:04:55.595011  161014 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:04:55.595023  161014 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:04:55.595090  161014 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:04:55.595204  161014 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:04:55.595227  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:04:55.595320  161014 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts -> hosts in /etc/test/nested/copy/141519
	I1009 19:04:55.595330  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts -> /etc/test/nested/copy/141519/hosts
	I1009 19:04:55.595388  161014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/141519
	I1009 19:04:55.603244  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:04:55.621701  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts --> /etc/test/nested/copy/141519/hosts (40 bytes)
	I1009 19:04:55.640532  161014 start.go:297] duration metric: took 172.708538ms for postStartSetup
	I1009 19:04:55.640625  161014 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:04:55.640672  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.658424  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.758913  161014 command_runner.go:130] > 38%
	I1009 19:04:55.759004  161014 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:04:55.763762  161014 command_runner.go:130] > 182G
	I1009 19:04:55.763807  161014 fix.go:57] duration metric: took 1.710584464s for fixHost
	I1009 19:04:55.763821  161014 start.go:84] releasing machines lock for "functional-158523", held for 1.710622732s
	I1009 19:04:55.763882  161014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-158523
	I1009 19:04:55.781557  161014 ssh_runner.go:195] Run: cat /version.json
	I1009 19:04:55.781620  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.781568  161014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:04:55.781740  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.800026  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.800289  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.899840  161014 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759745255-21703", "minikube_version": "v1.37.0", "commit": "a51fe4b7ffc88febd8814e8831f38772e976d097"}
	I1009 19:04:55.953125  161014 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1009 19:04:55.955421  161014 ssh_runner.go:195] Run: systemctl --version
	I1009 19:04:55.962169  161014 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1009 19:04:55.962207  161014 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1009 19:04:55.962422  161014 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:04:56.001789  161014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 19:04:56.006364  161014 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1009 19:04:56.006710  161014 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:04:56.006818  161014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:04:56.015207  161014 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:04:56.015234  161014 start.go:496] detecting cgroup driver to use...
	I1009 19:04:56.015270  161014 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:04:56.015326  161014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:04:56.030444  161014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:04:56.043355  161014 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:04:56.043439  161014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:04:56.058903  161014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:04:56.072794  161014 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:04:56.155598  161014 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:04:56.243484  161014 docker.go:234] disabling docker service ...
	I1009 19:04:56.243560  161014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:04:56.258472  161014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:04:56.271168  161014 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:04:56.357916  161014 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:04:56.444044  161014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:04:56.457436  161014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:04:56.471973  161014 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1009 19:04:56.472020  161014 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:04:56.472074  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.481231  161014 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:04:56.481304  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.490735  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.499743  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.508857  161014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:04:56.517176  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.525878  161014 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.534146  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.542852  161014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:04:56.549944  161014 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1009 19:04:56.550015  161014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:04:56.557444  161014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:04:56.640120  161014 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:04:56.755858  161014 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:04:56.755937  161014 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:04:56.760115  161014 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1009 19:04:56.760139  161014 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1009 19:04:56.760145  161014 command_runner.go:130] > Device: 0,59	Inode: 3908        Links: 1
	I1009 19:04:56.760152  161014 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 19:04:56.760157  161014 command_runner.go:130] > Access: 2025-10-09 19:04:56.737667059 +0000
	I1009 19:04:56.760162  161014 command_runner.go:130] > Modify: 2025-10-09 19:04:56.737667059 +0000
	I1009 19:04:56.760167  161014 command_runner.go:130] > Change: 2025-10-09 19:04:56.737667059 +0000
	I1009 19:04:56.760171  161014 command_runner.go:130] >  Birth: 2025-10-09 19:04:56.737667059 +0000
	I1009 19:04:56.760191  161014 start.go:564] Will wait 60s for crictl version
	I1009 19:04:56.760238  161014 ssh_runner.go:195] Run: which crictl
	I1009 19:04:56.764068  161014 command_runner.go:130] > /usr/local/bin/crictl
	I1009 19:04:56.764145  161014 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:04:56.790045  161014 command_runner.go:130] > Version:  0.1.0
	I1009 19:04:56.790068  161014 command_runner.go:130] > RuntimeName:  cri-o
	I1009 19:04:56.790072  161014 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1009 19:04:56.790077  161014 command_runner.go:130] > RuntimeApiVersion:  v1
	I1009 19:04:56.790095  161014 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:04:56.790164  161014 ssh_runner.go:195] Run: crio --version
	I1009 19:04:56.817435  161014 command_runner.go:130] > crio version 1.34.1
	I1009 19:04:56.817460  161014 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1009 19:04:56.817466  161014 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1009 19:04:56.817470  161014 command_runner.go:130] >    GitTreeState:   dirty
	I1009 19:04:56.817475  161014 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1009 19:04:56.817480  161014 command_runner.go:130] >    GoVersion:      go1.24.6
	I1009 19:04:56.817483  161014 command_runner.go:130] >    Compiler:       gc
	I1009 19:04:56.817488  161014 command_runner.go:130] >    Platform:       linux/amd64
	I1009 19:04:56.817492  161014 command_runner.go:130] >    Linkmode:       static
	I1009 19:04:56.817496  161014 command_runner.go:130] >    BuildTags:
	I1009 19:04:56.817499  161014 command_runner.go:130] >      static
	I1009 19:04:56.817503  161014 command_runner.go:130] >      netgo
	I1009 19:04:56.817506  161014 command_runner.go:130] >      osusergo
	I1009 19:04:56.817510  161014 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1009 19:04:56.817514  161014 command_runner.go:130] >      seccomp
	I1009 19:04:56.817518  161014 command_runner.go:130] >      apparmor
	I1009 19:04:56.817521  161014 command_runner.go:130] >      selinux
	I1009 19:04:56.817525  161014 command_runner.go:130] >    LDFlags:          unknown
	I1009 19:04:56.817531  161014 command_runner.go:130] >    SeccompEnabled:   true
	I1009 19:04:56.817535  161014 command_runner.go:130] >    AppArmorEnabled:  false
	I1009 19:04:56.819047  161014 ssh_runner.go:195] Run: crio --version
	I1009 19:04:56.846110  161014 command_runner.go:130] > crio version 1.34.1
	I1009 19:04:56.846137  161014 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1009 19:04:56.846145  161014 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1009 19:04:56.846154  161014 command_runner.go:130] >    GitTreeState:   dirty
	I1009 19:04:56.846160  161014 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1009 19:04:56.846166  161014 command_runner.go:130] >    GoVersion:      go1.24.6
	I1009 19:04:56.846172  161014 command_runner.go:130] >    Compiler:       gc
	I1009 19:04:56.846179  161014 command_runner.go:130] >    Platform:       linux/amd64
	I1009 19:04:56.846185  161014 command_runner.go:130] >    Linkmode:       static
	I1009 19:04:56.846193  161014 command_runner.go:130] >    BuildTags:
	I1009 19:04:56.846202  161014 command_runner.go:130] >      static
	I1009 19:04:56.846209  161014 command_runner.go:130] >      netgo
	I1009 19:04:56.846218  161014 command_runner.go:130] >      osusergo
	I1009 19:04:56.846226  161014 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1009 19:04:56.846238  161014 command_runner.go:130] >      seccomp
	I1009 19:04:56.846246  161014 command_runner.go:130] >      apparmor
	I1009 19:04:56.846252  161014 command_runner.go:130] >      selinux
	I1009 19:04:56.846262  161014 command_runner.go:130] >    LDFlags:          unknown
	I1009 19:04:56.846270  161014 command_runner.go:130] >    SeccompEnabled:   true
	I1009 19:04:56.846280  161014 command_runner.go:130] >    AppArmorEnabled:  false
	I1009 19:04:56.849910  161014 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:04:56.851471  161014 cli_runner.go:164] Run: docker network inspect functional-158523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:04:56.867982  161014 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:04:56.872517  161014 command_runner.go:130] > 192.168.49.1	host.minikube.internal
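The grep above checks that the guest's /etc/hosts already maps host.minikube.internal to the Docker network gateway (192.168.49.1 in this cluster). A small illustrative Go equivalent of that lookup (path, IP and hostname taken from the log; the helper name is made up):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasHostEntry reports whether the hosts file at path maps hostname to ip.
func hasHostEntry(path, ip, hostname string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) < 2 || fields[0] != ip {
			continue
		}
		for _, h := range fields[1:] {
			if h == hostname {
				return true, nil
			}
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasHostEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal")
	fmt.Println(ok, err)
}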
	I1009 19:04:56.872627  161014 kubeadm.go:883] updating cluster {Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:04:56.872731  161014 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:04:56.872790  161014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:04:56.904568  161014 command_runner.go:130] > {
	I1009 19:04:56.904591  161014 command_runner.go:130] >   "images":  [
	I1009 19:04:56.904595  161014 command_runner.go:130] >     {
	I1009 19:04:56.904603  161014 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1009 19:04:56.904608  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.904617  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1009 19:04:56.904622  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904628  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.904652  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1009 19:04:56.904667  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1009 19:04:56.904673  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904681  161014 command_runner.go:130] >       "size":  "109379124",
	I1009 19:04:56.904688  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.904694  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.904700  161014 command_runner.go:130] >     },
	I1009 19:04:56.904706  161014 command_runner.go:130] >     {
	I1009 19:04:56.904719  161014 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 19:04:56.904728  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.904736  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 19:04:56.904744  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904754  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.904771  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 19:04:56.904786  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 19:04:56.904794  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904799  161014 command_runner.go:130] >       "size":  "31470524",
	I1009 19:04:56.904805  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.904814  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.904822  161014 command_runner.go:130] >     },
	I1009 19:04:56.904831  161014 command_runner.go:130] >     {
	I1009 19:04:56.904841  161014 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1009 19:04:56.904851  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.904861  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1009 19:04:56.904870  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904879  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.904890  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1009 19:04:56.904903  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1009 19:04:56.904912  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904919  161014 command_runner.go:130] >       "size":  "76103547",
	I1009 19:04:56.904928  161014 command_runner.go:130] >       "username":  "nonroot",
	I1009 19:04:56.904938  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.904946  161014 command_runner.go:130] >     },
	I1009 19:04:56.904951  161014 command_runner.go:130] >     {
	I1009 19:04:56.904963  161014 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1009 19:04:56.904972  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.904982  161014 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1009 19:04:56.904988  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904994  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905015  161014 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1009 19:04:56.905029  161014 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1009 19:04:56.905038  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905048  161014 command_runner.go:130] >       "size":  "195976448",
	I1009 19:04:56.905056  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905062  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.905071  161014 command_runner.go:130] >       },
	I1009 19:04:56.905082  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905092  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905096  161014 command_runner.go:130] >     },
	I1009 19:04:56.905099  161014 command_runner.go:130] >     {
	I1009 19:04:56.905111  161014 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1009 19:04:56.905120  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905128  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1009 19:04:56.905137  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905147  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905160  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1009 19:04:56.905174  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1009 19:04:56.905182  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905188  161014 command_runner.go:130] >       "size":  "89046001",
	I1009 19:04:56.905195  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905199  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.905207  161014 command_runner.go:130] >       },
	I1009 19:04:56.905218  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905228  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905235  161014 command_runner.go:130] >     },
	I1009 19:04:56.905240  161014 command_runner.go:130] >     {
	I1009 19:04:56.905253  161014 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1009 19:04:56.905262  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905273  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1009 19:04:56.905280  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905284  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905299  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1009 19:04:56.905315  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1009 19:04:56.905324  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905333  161014 command_runner.go:130] >       "size":  "76004181",
	I1009 19:04:56.905342  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905352  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.905360  161014 command_runner.go:130] >       },
	I1009 19:04:56.905367  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905393  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905402  161014 command_runner.go:130] >     },
	I1009 19:04:56.905407  161014 command_runner.go:130] >     {
	I1009 19:04:56.905417  161014 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1009 19:04:56.905427  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905438  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1009 19:04:56.905446  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905456  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905470  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1009 19:04:56.905482  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1009 19:04:56.905490  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905500  161014 command_runner.go:130] >       "size":  "73138073",
	I1009 19:04:56.905510  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905516  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905525  161014 command_runner.go:130] >     },
	I1009 19:04:56.905533  161014 command_runner.go:130] >     {
	I1009 19:04:56.905543  161014 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1009 19:04:56.905552  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905563  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1009 19:04:56.905571  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905579  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905590  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1009 19:04:56.905613  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1009 19:04:56.905622  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905629  161014 command_runner.go:130] >       "size":  "53844823",
	I1009 19:04:56.905637  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905647  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.905655  161014 command_runner.go:130] >       },
	I1009 19:04:56.905664  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905673  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905681  161014 command_runner.go:130] >     },
	I1009 19:04:56.905690  161014 command_runner.go:130] >     {
	I1009 19:04:56.905696  161014 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1009 19:04:56.905705  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905712  161014 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1009 19:04:56.905721  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905727  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905740  161014 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1009 19:04:56.905754  161014 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1009 19:04:56.905762  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905772  161014 command_runner.go:130] >       "size":  "742092",
	I1009 19:04:56.905783  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905791  161014 command_runner.go:130] >         "value":  "65535"
	I1009 19:04:56.905795  161014 command_runner.go:130] >       },
	I1009 19:04:56.905802  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905808  161014 command_runner.go:130] >       "pinned":  true
	I1009 19:04:56.905816  161014 command_runner.go:130] >     }
	I1009 19:04:56.905822  161014 command_runner.go:130] >   ]
	I1009 19:04:56.905830  161014 command_runner.go:130] > }
	I1009 19:04:56.906014  161014 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:04:56.906027  161014 crio.go:433] Images already preloaded, skipping extraction
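crio.go:514/433 conclude from the listing above that every image required for Kubernetes v1.34.1 is already in CRI-O's store, so the preload tarball is not re-extracted. A minimal sketch of that sort of check, parsing the same `crictl images --output json` shape shown in the log (struct fields mirror the JSON keys above; the required-image list is just an example drawn from this listing, not minikube's canonical list):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the JSON emitted by `crictl images --output json`.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// missingImages returns the entries of required that have no matching
// repo tag in the runtime's image store. Illustrative only.
func missingImages(required []string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return nil, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	var missing []string
	for _, want := range required {
		if !have[want] {
			missing = append(missing, want)
		}
	}
	return missing, nil
}

func main() {
	missing, err := missingImages([]string{
		"registry.k8s.io/kube-apiserver:v1.34.1",
		"registry.k8s.io/etcd:3.6.4-0",
		"registry.k8s.io/pause:3.10.1",
	})
	fmt.Println(missing, err)
}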
	I1009 19:04:56.906079  161014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:04:56.933720  161014 command_runner.go:130] > {
	I1009 19:04:56.933747  161014 command_runner.go:130] >   "images":  [
	I1009 19:04:56.933753  161014 command_runner.go:130] >     {
	I1009 19:04:56.933769  161014 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1009 19:04:56.933774  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.933781  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1009 19:04:56.933788  161014 command_runner.go:130] >       ],
	I1009 19:04:56.933794  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.933805  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1009 19:04:56.933821  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1009 19:04:56.933827  161014 command_runner.go:130] >       ],
	I1009 19:04:56.933835  161014 command_runner.go:130] >       "size":  "109379124",
	I1009 19:04:56.933845  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.933855  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.933861  161014 command_runner.go:130] >     },
	I1009 19:04:56.933864  161014 command_runner.go:130] >     {
	I1009 19:04:56.933873  161014 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 19:04:56.933879  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.933890  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 19:04:56.933899  161014 command_runner.go:130] >       ],
	I1009 19:04:56.933906  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.933921  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 19:04:56.933935  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 19:04:56.933944  161014 command_runner.go:130] >       ],
	I1009 19:04:56.933951  161014 command_runner.go:130] >       "size":  "31470524",
	I1009 19:04:56.933960  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.933970  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.933975  161014 command_runner.go:130] >     },
	I1009 19:04:56.933979  161014 command_runner.go:130] >     {
	I1009 19:04:56.933992  161014 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1009 19:04:56.934002  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934016  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1009 19:04:56.934029  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934036  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934050  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1009 19:04:56.934065  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1009 19:04:56.934072  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934079  161014 command_runner.go:130] >       "size":  "76103547",
	I1009 19:04:56.934086  161014 command_runner.go:130] >       "username":  "nonroot",
	I1009 19:04:56.934090  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934097  161014 command_runner.go:130] >     },
	I1009 19:04:56.934102  161014 command_runner.go:130] >     {
	I1009 19:04:56.934116  161014 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1009 19:04:56.934126  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934137  161014 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1009 19:04:56.934145  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934151  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934164  161014 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1009 19:04:56.934177  161014 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1009 19:04:56.934183  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934188  161014 command_runner.go:130] >       "size":  "195976448",
	I1009 19:04:56.934197  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934207  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.934216  161014 command_runner.go:130] >       },
	I1009 19:04:56.934263  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934275  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934279  161014 command_runner.go:130] >     },
	I1009 19:04:56.934283  161014 command_runner.go:130] >     {
	I1009 19:04:56.934296  161014 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1009 19:04:56.934306  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934315  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1009 19:04:56.934323  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934329  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934344  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1009 19:04:56.934358  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1009 19:04:56.934372  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934397  161014 command_runner.go:130] >       "size":  "89046001",
	I1009 19:04:56.934408  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934416  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.934425  161014 command_runner.go:130] >       },
	I1009 19:04:56.934435  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934444  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934452  161014 command_runner.go:130] >     },
	I1009 19:04:56.934461  161014 command_runner.go:130] >     {
	I1009 19:04:56.934473  161014 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1009 19:04:56.934480  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934486  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1009 19:04:56.934493  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934499  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934514  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1009 19:04:56.934529  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1009 19:04:56.934538  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934545  161014 command_runner.go:130] >       "size":  "76004181",
	I1009 19:04:56.934554  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934560  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.934566  161014 command_runner.go:130] >       },
	I1009 19:04:56.934572  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934578  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934581  161014 command_runner.go:130] >     },
	I1009 19:04:56.934584  161014 command_runner.go:130] >     {
	I1009 19:04:56.934592  161014 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1009 19:04:56.934597  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934605  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1009 19:04:56.934610  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934616  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934629  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1009 19:04:56.934643  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1009 19:04:56.934652  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934660  161014 command_runner.go:130] >       "size":  "73138073",
	I1009 19:04:56.934667  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934677  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934681  161014 command_runner.go:130] >     },
	I1009 19:04:56.934684  161014 command_runner.go:130] >     {
	I1009 19:04:56.934690  161014 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1009 19:04:56.934696  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934704  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1009 19:04:56.934709  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934716  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934726  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1009 19:04:56.934747  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1009 19:04:56.934753  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934772  161014 command_runner.go:130] >       "size":  "53844823",
	I1009 19:04:56.934779  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934786  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.934795  161014 command_runner.go:130] >       },
	I1009 19:04:56.934801  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934811  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934816  161014 command_runner.go:130] >     },
	I1009 19:04:56.934824  161014 command_runner.go:130] >     {
	I1009 19:04:56.934834  161014 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1009 19:04:56.934843  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934850  161014 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1009 19:04:56.934858  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934862  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934871  161014 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1009 19:04:56.934886  161014 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1009 19:04:56.934895  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934902  161014 command_runner.go:130] >       "size":  "742092",
	I1009 19:04:56.934910  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934917  161014 command_runner.go:130] >         "value":  "65535"
	I1009 19:04:56.934926  161014 command_runner.go:130] >       },
	I1009 19:04:56.934934  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934943  161014 command_runner.go:130] >       "pinned":  true
	I1009 19:04:56.934947  161014 command_runner.go:130] >     }
	I1009 19:04:56.934950  161014 command_runner.go:130] >   ]
	I1009 19:04:56.934953  161014 command_runner.go:130] > }
	I1009 19:04:56.935095  161014 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:04:56.935110  161014 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:04:56.935118  161014 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1009 19:04:56.935242  161014 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-158523 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
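kubeadm.go:946 prints the kubelet systemd drop-in that will be installed for this node: it wants crio.service, clears any inherited ExecStart, and relaunches the kubelet with node-specific flags (--hostname-override=functional-158523, --node-ip=192.168.49.2, the bootstrap kubeconfig). A sketch of rendering such a drop-in with Go's text/template, modelled on the unit above with a couple of flags trimmed for brevity (illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// kubeletUnit is an illustrative drop-in modelled on the unit dumped in the log.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from the cluster in this log.
	err := tmpl.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.34.1", "functional-158523", "192.168.49.2"})
	if err != nil {
		panic(err)
	}
}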
	I1009 19:04:56.935323  161014 ssh_runner.go:195] Run: crio config
	I1009 19:04:56.978304  161014 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1009 19:04:56.978336  161014 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1009 19:04:56.978345  161014 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1009 19:04:56.978350  161014 command_runner.go:130] > #
	I1009 19:04:56.978359  161014 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1009 19:04:56.978367  161014 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1009 19:04:56.978390  161014 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1009 19:04:56.978401  161014 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1009 19:04:56.978406  161014 command_runner.go:130] > # reload'.
	I1009 19:04:56.978415  161014 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1009 19:04:56.978436  161014 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1009 19:04:56.978448  161014 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1009 19:04:56.978458  161014 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1009 19:04:56.978464  161014 command_runner.go:130] > [crio]
	I1009 19:04:56.978476  161014 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1009 19:04:56.978484  161014 command_runner.go:130] > # containers images, in this directory.
	I1009 19:04:56.978495  161014 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1009 19:04:56.978505  161014 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1009 19:04:56.978514  161014 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1009 19:04:56.978523  161014 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1009 19:04:56.978532  161014 command_runner.go:130] > # imagestore = ""
	I1009 19:04:56.978541  161014 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1009 19:04:56.978554  161014 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1009 19:04:56.978561  161014 command_runner.go:130] > # storage_driver = "overlay"
	I1009 19:04:56.978571  161014 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1009 19:04:56.978581  161014 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1009 19:04:56.978591  161014 command_runner.go:130] > # storage_option = [
	I1009 19:04:56.978596  161014 command_runner.go:130] > # ]
	I1009 19:04:56.978605  161014 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1009 19:04:56.978616  161014 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1009 19:04:56.978623  161014 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1009 19:04:56.978631  161014 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1009 19:04:56.978640  161014 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1009 19:04:56.978647  161014 command_runner.go:130] > # always happen on a node reboot
	I1009 19:04:56.978654  161014 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1009 19:04:56.978669  161014 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1009 19:04:56.978682  161014 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1009 19:04:56.978689  161014 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1009 19:04:56.978695  161014 command_runner.go:130] > # version_file_persist = ""
	I1009 19:04:56.978714  161014 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1009 19:04:56.978728  161014 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1009 19:04:56.978737  161014 command_runner.go:130] > # internal_wipe = true
	I1009 19:04:56.978748  161014 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1009 19:04:56.978760  161014 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1009 19:04:56.978772  161014 command_runner.go:130] > # internal_repair = true
	I1009 19:04:56.978780  161014 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1009 19:04:56.978794  161014 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1009 19:04:56.978805  161014 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1009 19:04:56.978815  161014 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1009 19:04:56.978825  161014 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1009 19:04:56.978833  161014 command_runner.go:130] > [crio.api]
	I1009 19:04:56.978841  161014 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1009 19:04:56.978851  161014 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1009 19:04:56.978860  161014 command_runner.go:130] > # IP address on which the stream server will listen.
	I1009 19:04:56.978870  161014 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1009 19:04:56.978881  161014 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1009 19:04:56.978892  161014 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1009 19:04:56.978901  161014 command_runner.go:130] > # stream_port = "0"
	I1009 19:04:56.978910  161014 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1009 19:04:56.978920  161014 command_runner.go:130] > # stream_enable_tls = false
	I1009 19:04:56.978929  161014 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1009 19:04:56.978954  161014 command_runner.go:130] > # stream_idle_timeout = ""
	I1009 19:04:56.978969  161014 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1009 19:04:56.978978  161014 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1009 19:04:56.978985  161014 command_runner.go:130] > # stream_tls_cert = ""
	I1009 19:04:56.978999  161014 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1009 19:04:56.979007  161014 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1009 19:04:56.979013  161014 command_runner.go:130] > # stream_tls_key = ""
	I1009 19:04:56.979025  161014 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1009 19:04:56.979039  161014 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1009 19:04:56.979049  161014 command_runner.go:130] > # automatically pick up the changes.
	I1009 19:04:56.979058  161014 command_runner.go:130] > # stream_tls_ca = ""
	I1009 19:04:56.979084  161014 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 19:04:56.979098  161014 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1009 19:04:56.979110  161014 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 19:04:56.979117  161014 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1009 19:04:56.979127  161014 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1009 19:04:56.979134  161014 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1009 19:04:56.979139  161014 command_runner.go:130] > [crio.runtime]
	I1009 19:04:56.979146  161014 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1009 19:04:56.979155  161014 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1009 19:04:56.979163  161014 command_runner.go:130] > # "nofile=1024:2048"
	I1009 19:04:56.979177  161014 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1009 19:04:56.979187  161014 command_runner.go:130] > # default_ulimits = [
	I1009 19:04:56.979193  161014 command_runner.go:130] > # ]
	I1009 19:04:56.979206  161014 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1009 19:04:56.979215  161014 command_runner.go:130] > # no_pivot = false
	I1009 19:04:56.979226  161014 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1009 19:04:56.979239  161014 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1009 19:04:56.979251  161014 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1009 19:04:56.979259  161014 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1009 19:04:56.979267  161014 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1009 19:04:56.979277  161014 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 19:04:56.979283  161014 command_runner.go:130] > # conmon = ""
	I1009 19:04:56.979290  161014 command_runner.go:130] > # Cgroup setting for conmon
	I1009 19:04:56.979301  161014 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1009 19:04:56.979311  161014 command_runner.go:130] > conmon_cgroup = "pod"
	I1009 19:04:56.979320  161014 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1009 19:04:56.979327  161014 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1009 19:04:56.979338  161014 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 19:04:56.979347  161014 command_runner.go:130] > # conmon_env = [
	I1009 19:04:56.979353  161014 command_runner.go:130] > # ]
	I1009 19:04:56.979364  161014 command_runner.go:130] > # Additional environment variables to set for all the
	I1009 19:04:56.979392  161014 command_runner.go:130] > # containers. These are overridden if set in the
	I1009 19:04:56.979406  161014 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1009 19:04:56.979412  161014 command_runner.go:130] > # default_env = [
	I1009 19:04:56.979420  161014 command_runner.go:130] > # ]
	I1009 19:04:56.979429  161014 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1009 19:04:56.979443  161014 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1009 19:04:56.979453  161014 command_runner.go:130] > # selinux = false
	I1009 19:04:56.979463  161014 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1009 19:04:56.979479  161014 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1009 19:04:56.979489  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.979497  161014 command_runner.go:130] > # seccomp_profile = ""
	I1009 19:04:56.979509  161014 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1009 19:04:56.979522  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.979529  161014 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1009 19:04:56.979542  161014 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1009 19:04:56.979555  161014 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1009 19:04:56.979564  161014 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1009 19:04:56.979574  161014 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1009 19:04:56.979585  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.979593  161014 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1009 19:04:56.979605  161014 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1009 19:04:56.979615  161014 command_runner.go:130] > # the cgroup blockio controller.
	I1009 19:04:56.979622  161014 command_runner.go:130] > # blockio_config_file = ""
	I1009 19:04:56.979636  161014 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1009 19:04:56.979642  161014 command_runner.go:130] > # blockio parameters.
	I1009 19:04:56.979648  161014 command_runner.go:130] > # blockio_reload = false
	I1009 19:04:56.979658  161014 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1009 19:04:56.979664  161014 command_runner.go:130] > # irqbalance daemon.
	I1009 19:04:56.979672  161014 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1009 19:04:56.979681  161014 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1009 19:04:56.979690  161014 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1009 19:04:56.979700  161014 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1009 19:04:56.979710  161014 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1009 19:04:56.979724  161014 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1009 19:04:56.979731  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.979741  161014 command_runner.go:130] > # rdt_config_file = ""
	I1009 19:04:56.979753  161014 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1009 19:04:56.979764  161014 command_runner.go:130] > # cgroup_manager = "systemd"
	I1009 19:04:56.979773  161014 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1009 19:04:56.979783  161014 command_runner.go:130] > # separate_pull_cgroup = ""
	I1009 19:04:56.979791  161014 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1009 19:04:56.979800  161014 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1009 19:04:56.979809  161014 command_runner.go:130] > # will be added.
	I1009 19:04:56.979817  161014 command_runner.go:130] > # default_capabilities = [
	I1009 19:04:56.979826  161014 command_runner.go:130] > # 	"CHOWN",
	I1009 19:04:56.979832  161014 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1009 19:04:56.979840  161014 command_runner.go:130] > # 	"FSETID",
	I1009 19:04:56.979846  161014 command_runner.go:130] > # 	"FOWNER",
	I1009 19:04:56.979855  161014 command_runner.go:130] > # 	"SETGID",
	I1009 19:04:56.979876  161014 command_runner.go:130] > # 	"SETUID",
	I1009 19:04:56.979885  161014 command_runner.go:130] > # 	"SETPCAP",
	I1009 19:04:56.979891  161014 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1009 19:04:56.979901  161014 command_runner.go:130] > # 	"KILL",
	I1009 19:04:56.979906  161014 command_runner.go:130] > # ]
	I1009 19:04:56.979920  161014 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1009 19:04:56.979930  161014 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1009 19:04:56.979950  161014 command_runner.go:130] > # add_inheritable_capabilities = false
	I1009 19:04:56.979963  161014 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1009 19:04:56.979972  161014 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 19:04:56.979977  161014 command_runner.go:130] > default_sysctls = [
	I1009 19:04:56.979993  161014 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1009 19:04:56.979997  161014 command_runner.go:130] > ]
	I1009 19:04:56.980003  161014 command_runner.go:130] > # List of devices on the host that a
	I1009 19:04:56.980010  161014 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1009 19:04:56.980015  161014 command_runner.go:130] > # allowed_devices = [
	I1009 19:04:56.980019  161014 command_runner.go:130] > # 	"/dev/fuse",
	I1009 19:04:56.980024  161014 command_runner.go:130] > # 	"/dev/net/tun",
	I1009 19:04:56.980029  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980035  161014 command_runner.go:130] > # List of additional devices. specified as
	I1009 19:04:56.980047  161014 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1009 19:04:56.980055  161014 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1009 19:04:56.980063  161014 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 19:04:56.980069  161014 command_runner.go:130] > # additional_devices = [
	I1009 19:04:56.980072  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980079  161014 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1009 19:04:56.980084  161014 command_runner.go:130] > # cdi_spec_dirs = [
	I1009 19:04:56.980091  161014 command_runner.go:130] > # 	"/etc/cdi",
	I1009 19:04:56.980097  161014 command_runner.go:130] > # 	"/var/run/cdi",
	I1009 19:04:56.980101  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980111  161014 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1009 19:04:56.980120  161014 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1009 19:04:56.980126  161014 command_runner.go:130] > # Defaults to false.
	I1009 19:04:56.980133  161014 command_runner.go:130] > # device_ownership_from_security_context = false
	I1009 19:04:56.980146  161014 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1009 19:04:56.980157  161014 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1009 19:04:56.980163  161014 command_runner.go:130] > # hooks_dir = [
	I1009 19:04:56.980167  161014 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1009 19:04:56.980173  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980179  161014 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1009 19:04:56.980187  161014 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1009 19:04:56.980192  161014 command_runner.go:130] > # its default mounts from the following two files:
	I1009 19:04:56.980197  161014 command_runner.go:130] > #
	I1009 19:04:56.980202  161014 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1009 19:04:56.980211  161014 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1009 19:04:56.980218  161014 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1009 19:04:56.980221  161014 command_runner.go:130] > #
	I1009 19:04:56.980230  161014 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1009 19:04:56.980236  161014 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1009 19:04:56.980244  161014 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1009 19:04:56.980252  161014 command_runner.go:130] > #      only add mounts it finds in this file.
	I1009 19:04:56.980255  161014 command_runner.go:130] > #
	I1009 19:04:56.980261  161014 command_runner.go:130] > # default_mounts_file = ""
	I1009 19:04:56.980266  161014 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1009 19:04:56.980275  161014 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1009 19:04:56.980281  161014 command_runner.go:130] > # pids_limit = -1
	I1009 19:04:56.980286  161014 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1009 19:04:56.980294  161014 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1009 19:04:56.980300  161014 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1009 19:04:56.980309  161014 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1009 19:04:56.980315  161014 command_runner.go:130] > # log_size_max = -1
	I1009 19:04:56.980322  161014 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1009 19:04:56.980328  161014 command_runner.go:130] > # log_to_journald = false
	I1009 19:04:56.980335  161014 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1009 19:04:56.980341  161014 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1009 19:04:56.980345  161014 command_runner.go:130] > # Path to directory for container attach sockets.
	I1009 19:04:56.980352  161014 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1009 19:04:56.980357  161014 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1009 19:04:56.980365  161014 command_runner.go:130] > # bind_mount_prefix = ""
	I1009 19:04:56.980370  161014 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1009 19:04:56.980376  161014 command_runner.go:130] > # read_only = false
	I1009 19:04:56.980395  161014 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1009 19:04:56.980405  161014 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1009 19:04:56.980413  161014 command_runner.go:130] > # live configuration reload.
	I1009 19:04:56.980417  161014 command_runner.go:130] > # log_level = "info"
	I1009 19:04:56.980425  161014 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1009 19:04:56.980430  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.980435  161014 command_runner.go:130] > # log_filter = ""
	I1009 19:04:56.980441  161014 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1009 19:04:56.980449  161014 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1009 19:04:56.980455  161014 command_runner.go:130] > # separated by comma.
	I1009 19:04:56.980462  161014 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:04:56.980467  161014 command_runner.go:130] > # uid_mappings = ""
	I1009 19:04:56.980473  161014 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1009 19:04:56.980480  161014 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1009 19:04:56.980486  161014 command_runner.go:130] > # separated by comma.
	I1009 19:04:56.980496  161014 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:04:56.980502  161014 command_runner.go:130] > # gid_mappings = ""
	I1009 19:04:56.980508  161014 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1009 19:04:56.980516  161014 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 19:04:56.980524  161014 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 19:04:56.980534  161014 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:04:56.980540  161014 command_runner.go:130] > # minimum_mappable_uid = -1
	I1009 19:04:56.980547  161014 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1009 19:04:56.980556  161014 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 19:04:56.980562  161014 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 19:04:56.980569  161014 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:04:56.980575  161014 command_runner.go:130] > # minimum_mappable_gid = -1
	I1009 19:04:56.980581  161014 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1009 19:04:56.980588  161014 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1009 19:04:56.980593  161014 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1009 19:04:56.980599  161014 command_runner.go:130] > # ctr_stop_timeout = 30
	I1009 19:04:56.980605  161014 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1009 19:04:56.980612  161014 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1009 19:04:56.980616  161014 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1009 19:04:56.980623  161014 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1009 19:04:56.980627  161014 command_runner.go:130] > # drop_infra_ctr = true
	I1009 19:04:56.980635  161014 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1009 19:04:56.980640  161014 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1009 19:04:56.980649  161014 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1009 19:04:56.980657  161014 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1009 19:04:56.980666  161014 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1009 19:04:56.980674  161014 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1009 19:04:56.980682  161014 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1009 19:04:56.980687  161014 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1009 19:04:56.980695  161014 command_runner.go:130] > # shared_cpuset = ""
	I1009 19:04:56.980703  161014 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1009 19:04:56.980707  161014 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1009 19:04:56.980712  161014 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1009 19:04:56.980719  161014 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1009 19:04:56.980725  161014 command_runner.go:130] > # pinns_path = ""
	I1009 19:04:56.980730  161014 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1009 19:04:56.980738  161014 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1009 19:04:56.980742  161014 command_runner.go:130] > # enable_criu_support = true
	I1009 19:04:56.980749  161014 command_runner.go:130] > # Enable/disable the generation of the container,
	I1009 19:04:56.980754  161014 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1009 19:04:56.980761  161014 command_runner.go:130] > # enable_pod_events = false
	I1009 19:04:56.980767  161014 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1009 19:04:56.980775  161014 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1009 19:04:56.980779  161014 command_runner.go:130] > # default_runtime = "crun"
	I1009 19:04:56.980785  161014 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1009 19:04:56.980792  161014 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1009 19:04:56.980803  161014 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1009 19:04:56.980809  161014 command_runner.go:130] > # creation as a file is not desired either.
	I1009 19:04:56.980817  161014 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1009 19:04:56.980823  161014 command_runner.go:130] > # the hostname is being managed dynamically.
	I1009 19:04:56.980828  161014 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1009 19:04:56.980831  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980836  161014 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1009 19:04:56.980844  161014 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1009 19:04:56.980850  161014 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1009 19:04:56.980858  161014 command_runner.go:130] > # Each entry in the table should follow the format:
	I1009 19:04:56.980861  161014 command_runner.go:130] > #
	I1009 19:04:56.980865  161014 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1009 19:04:56.980872  161014 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1009 19:04:56.980875  161014 command_runner.go:130] > # runtime_type = "oci"
	I1009 19:04:56.980882  161014 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1009 19:04:56.980887  161014 command_runner.go:130] > # inherit_default_runtime = false
	I1009 19:04:56.980894  161014 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1009 19:04:56.980898  161014 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1009 19:04:56.980902  161014 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1009 19:04:56.980906  161014 command_runner.go:130] > # monitor_env = []
	I1009 19:04:56.980910  161014 command_runner.go:130] > # privileged_without_host_devices = false
	I1009 19:04:56.980917  161014 command_runner.go:130] > # allowed_annotations = []
	I1009 19:04:56.980922  161014 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1009 19:04:56.980928  161014 command_runner.go:130] > # no_sync_log = false
	I1009 19:04:56.980932  161014 command_runner.go:130] > # default_annotations = {}
	I1009 19:04:56.980939  161014 command_runner.go:130] > # stream_websockets = false
	I1009 19:04:56.980949  161014 command_runner.go:130] > # seccomp_profile = ""
	I1009 19:04:56.980985  161014 command_runner.go:130] > # Where:
	I1009 19:04:56.980994  161014 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1009 19:04:56.980999  161014 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1009 19:04:56.981005  161014 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1009 19:04:56.981010  161014 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1009 19:04:56.981014  161014 command_runner.go:130] > #   in $PATH.
	I1009 19:04:56.981020  161014 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1009 19:04:56.981024  161014 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1009 19:04:56.981032  161014 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1009 19:04:56.981035  161014 command_runner.go:130] > #   state.
	I1009 19:04:56.981041  161014 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1009 19:04:56.981049  161014 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1009 19:04:56.981054  161014 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1009 19:04:56.981063  161014 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1009 19:04:56.981067  161014 command_runner.go:130] > #   the values from the default runtime on load time.
	I1009 19:04:56.981078  161014 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1009 19:04:56.981086  161014 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1009 19:04:56.981092  161014 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1009 19:04:56.981100  161014 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1009 19:04:56.981105  161014 command_runner.go:130] > #   The currently recognized values are:
	I1009 19:04:56.981113  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1009 19:04:56.981123  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1009 19:04:56.981130  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1009 19:04:56.981135  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1009 19:04:56.981144  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1009 19:04:56.981153  161014 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1009 19:04:56.981161  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1009 19:04:56.981169  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1009 19:04:56.981177  161014 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1009 19:04:56.981183  161014 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1009 19:04:56.981191  161014 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1009 19:04:56.981199  161014 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1009 19:04:56.981204  161014 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1009 19:04:56.981213  161014 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1009 19:04:56.981221  161014 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1009 19:04:56.981227  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1009 19:04:56.981235  161014 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1009 19:04:56.981239  161014 command_runner.go:130] > #   deprecated option "conmon".
	I1009 19:04:56.981248  161014 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1009 19:04:56.981255  161014 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1009 19:04:56.981261  161014 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1009 19:04:56.981268  161014 command_runner.go:130] > #   should be moved to the container's cgroup
	I1009 19:04:56.981273  161014 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1009 19:04:56.981280  161014 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1009 19:04:56.981287  161014 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1009 19:04:56.981293  161014 command_runner.go:130] > #   conmon-rs by using:
	I1009 19:04:56.981300  161014 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1009 19:04:56.981309  161014 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1009 19:04:56.981318  161014 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1009 19:04:56.981326  161014 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1009 19:04:56.981334  161014 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1009 19:04:56.981341  161014 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1009 19:04:56.981351  161014 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1009 19:04:56.981359  161014 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1009 19:04:56.981370  161014 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1009 19:04:56.981395  161014 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1009 19:04:56.981405  161014 command_runner.go:130] > #   when a machine crash happens.
	I1009 19:04:56.981411  161014 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1009 19:04:56.981421  161014 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1009 19:04:56.981431  161014 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1009 19:04:56.981437  161014 command_runner.go:130] > #   seccomp profile for the runtime.
	I1009 19:04:56.981443  161014 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1009 19:04:56.981452  161014 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1009 19:04:56.981455  161014 command_runner.go:130] > #
	I1009 19:04:56.981460  161014 command_runner.go:130] > # Using the seccomp notifier feature:
	I1009 19:04:56.981465  161014 command_runner.go:130] > #
	I1009 19:04:56.981472  161014 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1009 19:04:56.981480  161014 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1009 19:04:56.981483  161014 command_runner.go:130] > #
	I1009 19:04:56.981490  161014 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1009 19:04:56.981498  161014 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1009 19:04:56.981501  161014 command_runner.go:130] > #
	I1009 19:04:56.981507  161014 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1009 19:04:56.981512  161014 command_runner.go:130] > # feature.
	I1009 19:04:56.981515  161014 command_runner.go:130] > #
	I1009 19:04:56.981537  161014 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1009 19:04:56.981545  161014 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1009 19:04:56.981553  161014 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1009 19:04:56.981562  161014 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1009 19:04:56.981568  161014 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1009 19:04:56.981573  161014 command_runner.go:130] > #
	I1009 19:04:56.981579  161014 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1009 19:04:56.981587  161014 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1009 19:04:56.981590  161014 command_runner.go:130] > #
	I1009 19:04:56.981598  161014 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1009 19:04:56.981603  161014 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1009 19:04:56.981608  161014 command_runner.go:130] > #
	I1009 19:04:56.981614  161014 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1009 19:04:56.981622  161014 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1009 19:04:56.981628  161014 command_runner.go:130] > # limitation.
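	For reference, a minimal sketch of a pod opting into the seccomp notifier described above. It assumes a runtime handler (called "crun-notify" here purely for illustration) whose allowed_annotations include "io.kubernetes.cri-o.seccompNotifierAction", plus a RuntimeClass exposing that handler; the pod name and image are likewise illustrative.

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: seccomp-notify-demo
	      annotations:
	        # With value "stop", CRI-O terminates the workload after the 5 second timeout once a blocked syscall is seen.
	        io.kubernetes.cri-o.seccompNotifierAction: "stop"
	    spec:
	      runtimeClassName: crun-notify   # hypothetical handler that allows the annotation
	      restartPolicy: Never            # required, otherwise the kubelet restarts the container immediately
	      containers:
	      - name: workload
	        image: registry.k8s.io/pause:3.10.1
	        securityContext:
	          seccompProfile:
	            type: RuntimeDefault      # a seccomp profile must be applied for CRI-O to modify it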
	I1009 19:04:56.981632  161014 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1009 19:04:56.981639  161014 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1009 19:04:56.981642  161014 command_runner.go:130] > runtime_type = ""
	I1009 19:04:56.981648  161014 command_runner.go:130] > runtime_root = "/run/crun"
	I1009 19:04:56.981652  161014 command_runner.go:130] > inherit_default_runtime = false
	I1009 19:04:56.981657  161014 command_runner.go:130] > runtime_config_path = ""
	I1009 19:04:56.981663  161014 command_runner.go:130] > container_min_memory = ""
	I1009 19:04:56.981667  161014 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 19:04:56.981673  161014 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 19:04:56.981677  161014 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 19:04:56.981683  161014 command_runner.go:130] > allowed_annotations = [
	I1009 19:04:56.981687  161014 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1009 19:04:56.981694  161014 command_runner.go:130] > ]
	I1009 19:04:56.981699  161014 command_runner.go:130] > privileged_without_host_devices = false
	I1009 19:04:56.981705  161014 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1009 19:04:56.981709  161014 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1009 19:04:56.981715  161014 command_runner.go:130] > runtime_type = ""
	I1009 19:04:56.981719  161014 command_runner.go:130] > runtime_root = "/run/runc"
	I1009 19:04:56.981725  161014 command_runner.go:130] > inherit_default_runtime = false
	I1009 19:04:56.981729  161014 command_runner.go:130] > runtime_config_path = ""
	I1009 19:04:56.981735  161014 command_runner.go:130] > container_min_memory = ""
	I1009 19:04:56.981739  161014 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 19:04:56.981744  161014 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 19:04:56.981750  161014 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 19:04:56.981754  161014 command_runner.go:130] > privileged_without_host_devices = false
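	The handler names in the tables above ("crun", "runc") are what Kubernetes selects through a RuntimeClass. A minimal sketch, assuming the "runc" handler defined above; the object name is arbitrary:

	    apiVersion: node.k8s.io/v1
	    kind: RuntimeClass
	    metadata:
	      name: runc
	    handler: runc   # must match [crio.runtime.runtimes.runc]

	Pods select it via spec.runtimeClassName: runc; if no runtime handler is provided, CRI-O falls back to default_runtime (the commented default "crun" above).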
	I1009 19:04:56.981761  161014 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1009 19:04:56.981769  161014 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1009 19:04:56.981774  161014 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1009 19:04:56.981783  161014 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1009 19:04:56.981795  161014 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1009 19:04:56.981807  161014 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1009 19:04:56.981815  161014 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1009 19:04:56.981823  161014 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1009 19:04:56.981831  161014 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1009 19:04:56.981840  161014 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1009 19:04:56.981848  161014 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1009 19:04:56.981854  161014 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1009 19:04:56.981859  161014 command_runner.go:130] > # Example:
	I1009 19:04:56.981864  161014 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1009 19:04:56.981871  161014 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1009 19:04:56.981875  161014 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1009 19:04:56.981884  161014 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1009 19:04:56.981899  161014 command_runner.go:130] > # cpuset = "0-1"
	I1009 19:04:56.981905  161014 command_runner.go:130] > # cpushares = "5"
	I1009 19:04:56.981909  161014 command_runner.go:130] > # cpuquota = "1000"
	I1009 19:04:56.981912  161014 command_runner.go:130] > # cpuperiod = "100000"
	I1009 19:04:56.981920  161014 command_runner.go:130] > # cpulimit = "35"
	I1009 19:04:56.981926  161014 command_runner.go:130] > # Where:
	I1009 19:04:56.981936  161014 command_runner.go:130] > # The workload name is workload-type.
	I1009 19:04:56.981948  161014 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1009 19:04:56.981955  161014 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1009 19:04:56.981962  161014 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1009 19:04:56.981971  161014 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1009 19:04:56.981979  161014 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
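	And a hedged sketch of a pod opting into the example workload above; the annotation keys follow the dumped example, while the pod/container names and the cpushares value are illustrative:

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: workload-demo
	      annotations:
	        io.crio/workload: ""                               # activation annotation; key only, value ignored
	        io.crio.workload-type/app: '{"cpushares": "200"}'  # per-container override for container "app"
	    spec:
	      containers:
	      - name: app
	        image: registry.k8s.io/pause:3.10.1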
	I1009 19:04:56.981984  161014 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1009 19:04:56.981993  161014 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1009 19:04:56.981997  161014 command_runner.go:130] > # Default value is set to true
	I1009 19:04:56.982003  161014 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1009 19:04:56.982009  161014 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1009 19:04:56.982013  161014 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1009 19:04:56.982017  161014 command_runner.go:130] > # Default value is set to 'false'
	I1009 19:04:56.982020  161014 command_runner.go:130] > # disable_hostport_mapping = false
	I1009 19:04:56.982025  161014 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1009 19:04:56.982034  161014 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1009 19:04:56.982039  161014 command_runner.go:130] > # timezone = ""
	I1009 19:04:56.982045  161014 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1009 19:04:56.982050  161014 command_runner.go:130] > #
	I1009 19:04:56.982056  161014 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1009 19:04:56.982064  161014 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1009 19:04:56.982067  161014 command_runner.go:130] > [crio.image]
	I1009 19:04:56.982072  161014 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1009 19:04:56.982080  161014 command_runner.go:130] > # default_transport = "docker://"
	I1009 19:04:56.982085  161014 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1009 19:04:56.982093  161014 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1009 19:04:56.982100  161014 command_runner.go:130] > # global_auth_file = ""
	I1009 19:04:56.982105  161014 command_runner.go:130] > # The image used to instantiate infra containers.
	I1009 19:04:56.982112  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.982116  161014 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1009 19:04:56.982124  161014 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1009 19:04:56.982132  161014 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1009 19:04:56.982137  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.982143  161014 command_runner.go:130] > # pause_image_auth_file = ""
	I1009 19:04:56.982148  161014 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1009 19:04:56.982156  161014 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1009 19:04:56.982162  161014 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1009 19:04:56.982170  161014 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1009 19:04:56.982173  161014 command_runner.go:130] > # pause_command = "/pause"
	I1009 19:04:56.982178  161014 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1009 19:04:56.982186  161014 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1009 19:04:56.982191  161014 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1009 19:04:56.982199  161014 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1009 19:04:56.982204  161014 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1009 19:04:56.982213  161014 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1009 19:04:56.982219  161014 command_runner.go:130] > # pinned_images = [
	I1009 19:04:56.982222  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982227  161014 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1009 19:04:56.982235  161014 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1009 19:04:56.982241  161014 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1009 19:04:56.982248  161014 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1009 19:04:56.982253  161014 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1009 19:04:56.982260  161014 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1009 19:04:56.982265  161014 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1009 19:04:56.982274  161014 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1009 19:04:56.982282  161014 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1009 19:04:56.982287  161014 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1009 19:04:56.982295  161014 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1009 19:04:56.982302  161014 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1009 19:04:56.982307  161014 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1009 19:04:56.982316  161014 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1009 19:04:56.982322  161014 command_runner.go:130] > # changing them here.
	I1009 19:04:56.982327  161014 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1009 19:04:56.982333  161014 command_runner.go:130] > # insecure_registries = [
	I1009 19:04:56.982336  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982342  161014 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1009 19:04:56.982352  161014 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1009 19:04:56.982359  161014 command_runner.go:130] > # image_volumes = "mkdir"
	I1009 19:04:56.982364  161014 command_runner.go:130] > # Temporary directory to use for storing big files
	I1009 19:04:56.982370  161014 command_runner.go:130] > # big_files_temporary_dir = ""
	I1009 19:04:56.982385  161014 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1009 19:04:56.982398  161014 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1009 19:04:56.982403  161014 command_runner.go:130] > # auto_reload_registries = false
	I1009 19:04:56.982412  161014 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1009 19:04:56.982419  161014 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1009 19:04:56.982427  161014 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1009 19:04:56.982431  161014 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1009 19:04:56.982435  161014 command_runner.go:130] > # The mode of short name resolution.
	I1009 19:04:56.982441  161014 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1009 19:04:56.982450  161014 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1009 19:04:56.982455  161014 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1009 19:04:56.982460  161014 command_runner.go:130] > # short_name_mode = "enforcing"
	I1009 19:04:56.982465  161014 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1009 19:04:56.982472  161014 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1009 19:04:56.982476  161014 command_runner.go:130] > # oci_artifact_mount_support = true
	I1009 19:04:56.982484  161014 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1009 19:04:56.982487  161014 command_runner.go:130] > # CNI plugins.
	I1009 19:04:56.982490  161014 command_runner.go:130] > [crio.network]
	I1009 19:04:56.982496  161014 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1009 19:04:56.982501  161014 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1009 19:04:56.982507  161014 command_runner.go:130] > # cni_default_network = ""
	I1009 19:04:56.982512  161014 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1009 19:04:56.982519  161014 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1009 19:04:56.982524  161014 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1009 19:04:56.982530  161014 command_runner.go:130] > # plugin_dirs = [
	I1009 19:04:56.982533  161014 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1009 19:04:56.982536  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982540  161014 command_runner.go:130] > # List of included pod metrics.
	I1009 19:04:56.982544  161014 command_runner.go:130] > # included_pod_metrics = [
	I1009 19:04:56.982547  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982552  161014 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1009 19:04:56.982558  161014 command_runner.go:130] > [crio.metrics]
	I1009 19:04:56.982562  161014 command_runner.go:130] > # Globally enable or disable metrics support.
	I1009 19:04:56.982566  161014 command_runner.go:130] > # enable_metrics = false
	I1009 19:04:56.982570  161014 command_runner.go:130] > # Specify enabled metrics collectors.
	I1009 19:04:56.982574  161014 command_runner.go:130] > # Per default all metrics are enabled.
	I1009 19:04:56.982579  161014 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1009 19:04:56.982588  161014 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1009 19:04:56.982593  161014 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1009 19:04:56.982598  161014 command_runner.go:130] > # metrics_collectors = [
	I1009 19:04:56.982602  161014 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1009 19:04:56.982607  161014 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1009 19:04:56.982610  161014 command_runner.go:130] > # 	"containers_oom_total",
	I1009 19:04:56.982614  161014 command_runner.go:130] > # 	"processes_defunct",
	I1009 19:04:56.982617  161014 command_runner.go:130] > # 	"operations_total",
	I1009 19:04:56.982621  161014 command_runner.go:130] > # 	"operations_latency_seconds",
	I1009 19:04:56.982625  161014 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1009 19:04:56.982629  161014 command_runner.go:130] > # 	"operations_errors_total",
	I1009 19:04:56.982632  161014 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1009 19:04:56.982636  161014 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1009 19:04:56.982640  161014 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1009 19:04:56.982643  161014 command_runner.go:130] > # 	"image_pulls_success_total",
	I1009 19:04:56.982648  161014 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1009 19:04:56.982652  161014 command_runner.go:130] > # 	"containers_oom_count_total",
	I1009 19:04:56.982656  161014 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1009 19:04:56.982660  161014 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1009 19:04:56.982664  161014 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1009 19:04:56.982667  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982672  161014 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1009 19:04:56.982675  161014 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1009 19:04:56.982680  161014 command_runner.go:130] > # The port on which the metrics server will listen.
	I1009 19:04:56.982683  161014 command_runner.go:130] > # metrics_port = 9090
	I1009 19:04:56.982689  161014 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1009 19:04:56.982693  161014 command_runner.go:130] > # metrics_socket = ""
	I1009 19:04:56.982698  161014 command_runner.go:130] > # The certificate for the secure metrics server.
	I1009 19:04:56.982706  161014 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1009 19:04:56.982712  161014 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1009 19:04:56.982718  161014 command_runner.go:130] > # certificate on any modification event.
	I1009 19:04:56.982722  161014 command_runner.go:130] > # metrics_cert = ""
	I1009 19:04:56.982735  161014 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1009 19:04:56.982741  161014 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1009 19:04:56.982746  161014 command_runner.go:130] > # metrics_key = ""
	I1009 19:04:56.982753  161014 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1009 19:04:56.982758  161014 command_runner.go:130] > [crio.tracing]
	I1009 19:04:56.982766  161014 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1009 19:04:56.982771  161014 command_runner.go:130] > # enable_tracing = false
	I1009 19:04:56.982779  161014 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1009 19:04:56.982788  161014 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1009 19:04:56.982798  161014 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1009 19:04:56.982809  161014 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1009 19:04:56.982818  161014 command_runner.go:130] > # CRI-O NRI configuration.
	I1009 19:04:56.982821  161014 command_runner.go:130] > [crio.nri]
	I1009 19:04:56.982825  161014 command_runner.go:130] > # Globally enable or disable NRI.
	I1009 19:04:56.982832  161014 command_runner.go:130] > # enable_nri = true
	I1009 19:04:56.982836  161014 command_runner.go:130] > # NRI socket to listen on.
	I1009 19:04:56.982842  161014 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1009 19:04:56.982846  161014 command_runner.go:130] > # NRI plugin directory to use.
	I1009 19:04:56.982851  161014 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1009 19:04:56.982856  161014 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1009 19:04:56.982863  161014 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1009 19:04:56.982868  161014 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1009 19:04:56.982900  161014 command_runner.go:130] > # nri_disable_connections = false
	I1009 19:04:56.982908  161014 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1009 19:04:56.982912  161014 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1009 19:04:56.982916  161014 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1009 19:04:56.982920  161014 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1009 19:04:56.982926  161014 command_runner.go:130] > # NRI default validator configuration.
	I1009 19:04:56.982933  161014 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1009 19:04:56.982946  161014 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1009 19:04:56.982953  161014 command_runner.go:130] > # can be restricted/rejected:
	I1009 19:04:56.982956  161014 command_runner.go:130] > # - OCI hook injection
	I1009 19:04:56.982961  161014 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1009 19:04:56.982969  161014 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1009 19:04:56.982974  161014 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1009 19:04:56.982982  161014 command_runner.go:130] > # - adjustment of linux namespaces
	I1009 19:04:56.982988  161014 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1009 19:04:56.982996  161014 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1009 19:04:56.983002  161014 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1009 19:04:56.983007  161014 command_runner.go:130] > #
	I1009 19:04:56.983011  161014 command_runner.go:130] > # [crio.nri.default_validator]
	I1009 19:04:56.983015  161014 command_runner.go:130] > # nri_enable_default_validator = false
	I1009 19:04:56.983020  161014 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1009 19:04:56.983027  161014 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1009 19:04:56.983032  161014 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1009 19:04:56.983039  161014 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1009 19:04:56.983044  161014 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1009 19:04:56.983050  161014 command_runner.go:130] > # nri_validator_required_plugins = [
	I1009 19:04:56.983053  161014 command_runner.go:130] > # ]
	I1009 19:04:56.983058  161014 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1009 19:04:56.983066  161014 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1009 19:04:56.983069  161014 command_runner.go:130] > [crio.stats]
	I1009 19:04:56.983074  161014 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1009 19:04:56.983087  161014 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1009 19:04:56.983092  161014 command_runner.go:130] > # stats_collection_period = 0
	I1009 19:04:56.983097  161014 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1009 19:04:56.983106  161014 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1009 19:04:56.983109  161014 command_runner.go:130] > # collection_period = 0
	I1009 19:04:56.983133  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.961902946Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1009 19:04:56.983143  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.961928249Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1009 19:04:56.983151  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.961952575Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1009 19:04:56.983160  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.961969788Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1009 19:04:56.983168  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.962036562Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.983178  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.96221376Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1009 19:04:56.983187  161014 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1009 19:04:56.983250  161014 cni.go:84] Creating CNI manager for ""
	I1009 19:04:56.983259  161014 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:04:56.983280  161014 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:04:56.983306  161014 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-158523 NodeName:functional-158523 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:04:56.983442  161014 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-158523"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:04:56.983504  161014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:04:56.992256  161014 command_runner.go:130] > kubeadm
	I1009 19:04:56.992278  161014 command_runner.go:130] > kubectl
	I1009 19:04:56.992282  161014 command_runner.go:130] > kubelet
	I1009 19:04:56.992304  161014 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:04:56.992347  161014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:04:57.000522  161014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 19:04:57.013113  161014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:04:57.026211  161014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1009 19:04:57.038776  161014 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:04:57.042573  161014 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1009 19:04:57.042649  161014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:04:57.130268  161014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:04:57.143785  161014 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523 for IP: 192.168.49.2
	I1009 19:04:57.143808  161014 certs.go:195] generating shared ca certs ...
	I1009 19:04:57.143829  161014 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:04:57.144031  161014 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:04:57.144072  161014 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:04:57.144082  161014 certs.go:257] generating profile certs ...
	I1009 19:04:57.144182  161014 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key
	I1009 19:04:57.144224  161014 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key.1809350a
	I1009 19:04:57.144260  161014 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key
	I1009 19:04:57.144272  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:04:57.144283  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:04:57.144293  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:04:57.144302  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:04:57.144314  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:04:57.144325  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:04:57.144336  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:04:57.144348  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:04:57.144426  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:04:57.144461  161014 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:04:57.144470  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:04:57.144493  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:04:57.144516  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:04:57.144537  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:04:57.144579  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:04:57.144605  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.144619  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.144631  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.145144  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:04:57.163977  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:04:57.182180  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:04:57.200741  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:04:57.219086  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:04:57.236775  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:04:57.254529  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:04:57.272276  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:04:57.290804  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:04:57.309893  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:04:57.327963  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:04:57.345810  161014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:04:57.359185  161014 ssh_runner.go:195] Run: openssl version
	I1009 19:04:57.366137  161014 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1009 19:04:57.366338  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:04:57.375985  161014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.380041  161014 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.380082  161014 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.380117  161014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.415315  161014 command_runner.go:130] > b5213941
	I1009 19:04:57.415413  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:04:57.424315  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:04:57.433300  161014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.437553  161014 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.437594  161014 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.437635  161014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.472859  161014 command_runner.go:130] > 51391683
	I1009 19:04:57.473177  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:04:57.481800  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:04:57.490997  161014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.494992  161014 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.495040  161014 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.495095  161014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.529155  161014 command_runner.go:130] > 3ec20f2e
	I1009 19:04:57.529240  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:04:57.537710  161014 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:04:57.541624  161014 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:04:57.541645  161014 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1009 19:04:57.541653  161014 command_runner.go:130] > Device: 8,1	Inode: 573939      Links: 1
	I1009 19:04:57.541662  161014 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 19:04:57.541679  161014 command_runner.go:130] > Access: 2025-10-09 19:00:49.271404553 +0000
	I1009 19:04:57.541690  161014 command_runner.go:130] > Modify: 2025-10-09 18:56:44.405307509 +0000
	I1009 19:04:57.541704  161014 command_runner.go:130] > Change: 2025-10-09 18:56:44.405307509 +0000
	I1009 19:04:57.541714  161014 command_runner.go:130] >  Birth: 2025-10-09 18:56:44.405307509 +0000
	I1009 19:04:57.541773  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:04:57.576034  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.576418  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:04:57.610746  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.611106  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:04:57.645558  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.645650  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:04:57.680926  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.681269  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:04:57.716681  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.716965  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 19:04:57.752444  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.752733  161014 kubeadm.go:400] StartCluster: {Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:04:57.752827  161014 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:04:57.752877  161014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:04:57.781930  161014 cri.go:89] found id: ""
	I1009 19:04:57.782002  161014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:04:57.790396  161014 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1009 19:04:57.790421  161014 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1009 19:04:57.790427  161014 command_runner.go:130] > /var/lib/minikube/etcd:
	I1009 19:04:57.790446  161014 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:04:57.790453  161014 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:04:57.790499  161014 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:04:57.798150  161014 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:04:57.798252  161014 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-158523" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:57.798307  161014 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-137890/kubeconfig needs updating (will repair): [kubeconfig missing "functional-158523" cluster setting kubeconfig missing "functional-158523" context setting]
	I1009 19:04:57.798648  161014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:04:57.799428  161014 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:57.799625  161014 kapi.go:59] client config for functional-158523: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:04:57.800169  161014 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:04:57.800185  161014 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:04:57.800191  161014 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:04:57.800195  161014 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:04:57.800199  161014 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:04:57.800257  161014 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:04:57.800663  161014 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:04:57.808677  161014 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:04:57.808712  161014 kubeadm.go:601] duration metric: took 18.25382ms to restartPrimaryControlPlane
	I1009 19:04:57.808720  161014 kubeadm.go:402] duration metric: took 56.001565ms to StartCluster
	I1009 19:04:57.808736  161014 settings.go:142] acquiring lock: {Name:mk34466033fb866bc8e167d2d953624ad0802283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:04:57.808837  161014 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:57.809418  161014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:04:57.809652  161014 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:04:57.809720  161014 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:04:57.809869  161014 addons.go:69] Setting storage-provisioner=true in profile "functional-158523"
	I1009 19:04:57.809882  161014 addons.go:69] Setting default-storageclass=true in profile "functional-158523"
	I1009 19:04:57.809890  161014 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:57.809907  161014 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-158523"
	I1009 19:04:57.809888  161014 addons.go:238] Setting addon storage-provisioner=true in "functional-158523"
	I1009 19:04:57.809999  161014 host.go:66] Checking if "functional-158523" exists ...
	I1009 19:04:57.810265  161014 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:04:57.810325  161014 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:04:57.815899  161014 out.go:179] * Verifying Kubernetes components...
	I1009 19:04:57.817259  161014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:04:57.830319  161014 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:57.830565  161014 kapi.go:59] client config for functional-158523: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:04:57.830893  161014 addons.go:238] Setting addon default-storageclass=true in "functional-158523"
	I1009 19:04:57.830936  161014 host.go:66] Checking if "functional-158523" exists ...
	I1009 19:04:57.831444  161014 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:04:57.831697  161014 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:04:57.833512  161014 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:57.833530  161014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:04:57.833580  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:57.856284  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:57.858504  161014 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:57.858545  161014 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:04:57.858618  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:57.879618  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:57.916522  161014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:04:57.930660  161014 node_ready.go:35] waiting up to 6m0s for node "functional-158523" to be "Ready" ...
	I1009 19:04:57.930861  161014 type.go:168] "Request Body" body=""
	I1009 19:04:57.930944  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:57.931232  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:04:57.969596  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:57.988544  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:58.026986  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.027037  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.027061  161014 retry.go:31] will retry after 164.488016ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.047051  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.047098  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.047116  161014 retry.go:31] will retry after 194.483244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.192480  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:58.242329  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:58.247629  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.247684  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.247711  161014 retry.go:31] will retry after 217.861079ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.297775  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.297841  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.297866  161014 retry.go:31] will retry after 198.924996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.431075  161014 type.go:168] "Request Body" body=""
	I1009 19:04:58.431155  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:58.431537  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:04:58.466794  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:58.497509  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:58.521187  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.524476  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.524506  161014 retry.go:31] will retry after 579.961825ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.549062  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.552103  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.552134  161014 retry.go:31] will retry after 574.521259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.930944  161014 type.go:168] "Request Body" body=""
	I1009 19:04:58.931032  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:58.931452  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:04:59.104703  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:59.127368  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:59.161080  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:59.161136  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.161156  161014 retry.go:31] will retry after 734.839127ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.184025  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:59.184076  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.184098  161014 retry.go:31] will retry after 1.025268007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.431572  161014 type.go:168] "Request Body" body=""
	I1009 19:04:59.431684  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:59.432074  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:04:59.896539  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:59.931433  161014 type.go:168] "Request Body" body=""
	I1009 19:04:59.931506  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:59.931837  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:04:59.931910  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:04:59.949186  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:59.952452  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.952481  161014 retry.go:31] will retry after 1.084602838s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:00.209882  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:00.262148  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:00.265292  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:00.265336  161014 retry.go:31] will retry after 1.287073207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:00.431721  161014 type.go:168] "Request Body" body=""
	I1009 19:05:00.431804  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:00.432145  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:00.931797  161014 type.go:168] "Request Body" body=""
	I1009 19:05:00.931880  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:00.932240  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:01.037525  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:01.094236  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:01.094283  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:01.094304  161014 retry.go:31] will retry after 1.546934371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:01.431777  161014 type.go:168] "Request Body" body=""
	I1009 19:05:01.431854  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:01.432251  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:01.553547  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:01.609996  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:01.610065  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:01.610089  161014 retry.go:31] will retry after 1.923829662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:01.931551  161014 type.go:168] "Request Body" body=""
	I1009 19:05:01.931629  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:01.931969  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:01.932040  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:02.431907  161014 type.go:168] "Request Body" body=""
	I1009 19:05:02.431987  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:02.432358  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:02.641614  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:02.696762  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:02.699844  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:02.699873  161014 retry.go:31] will retry after 2.36633365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:02.931226  161014 type.go:168] "Request Body" body=""
	I1009 19:05:02.931306  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:02.931737  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:03.431598  161014 type.go:168] "Request Body" body=""
	I1009 19:05:03.431682  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:03.432054  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:03.534329  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:03.590565  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:03.590611  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:03.590631  161014 retry.go:31] will retry after 1.952860092s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:03.931329  161014 type.go:168] "Request Body" body=""
	I1009 19:05:03.931427  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:03.931807  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:04.431531  161014 type.go:168] "Request Body" body=""
	I1009 19:05:04.431620  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:04.432004  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:04.432087  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:04.931914  161014 type.go:168] "Request Body" body=""
	I1009 19:05:04.931993  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:04.932341  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:05.066624  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:05.119719  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:05.123044  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:05.123086  161014 retry.go:31] will retry after 6.108852521s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:05.431602  161014 type.go:168] "Request Body" body=""
	I1009 19:05:05.431719  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:05.432158  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:05.544481  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:05.597312  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:05.600803  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:05.600837  161014 retry.go:31] will retry after 3.364758217s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:05.931296  161014 type.go:168] "Request Body" body=""
	I1009 19:05:05.931418  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:05.931808  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:06.431397  161014 type.go:168] "Request Body" body=""
	I1009 19:05:06.431479  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:06.431873  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:06.931533  161014 type.go:168] "Request Body" body=""
	I1009 19:05:06.931626  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:06.932024  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:06.932104  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:07.431687  161014 type.go:168] "Request Body" body=""
	I1009 19:05:07.431779  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:07.432140  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:07.930948  161014 type.go:168] "Request Body" body=""
	I1009 19:05:07.931031  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:07.931436  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:08.431020  161014 type.go:168] "Request Body" body=""
	I1009 19:05:08.431105  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:08.431489  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:08.931423  161014 type.go:168] "Request Body" body=""
	I1009 19:05:08.931528  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:08.931995  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:08.966195  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:09.019582  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:09.022605  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:09.022645  161014 retry.go:31] will retry after 7.771885559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:09.431187  161014 type.go:168] "Request Body" body=""
	I1009 19:05:09.431265  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:09.431662  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:09.431745  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:09.931552  161014 type.go:168] "Request Body" body=""
	I1009 19:05:09.931635  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:09.931979  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:10.431855  161014 type.go:168] "Request Body" body=""
	I1009 19:05:10.431945  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:10.432274  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:10.931110  161014 type.go:168] "Request Body" body=""
	I1009 19:05:10.931203  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:10.931572  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:11.233030  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:11.288902  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:11.288953  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:11.288975  161014 retry.go:31] will retry after 3.345246752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:11.431308  161014 type.go:168] "Request Body" body=""
	I1009 19:05:11.431402  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:11.431749  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:11.431819  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:11.931642  161014 type.go:168] "Request Body" body=""
	I1009 19:05:11.931749  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:11.932113  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:12.430947  161014 type.go:168] "Request Body" body=""
	I1009 19:05:12.431034  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:12.431445  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:12.931228  161014 type.go:168] "Request Body" body=""
	I1009 19:05:12.931315  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:12.931711  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:13.431639  161014 type.go:168] "Request Body" body=""
	I1009 19:05:13.431724  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:13.432088  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:13.432151  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:13.930962  161014 type.go:168] "Request Body" body=""
	I1009 19:05:13.931048  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:13.931440  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:14.431210  161014 type.go:168] "Request Body" body=""
	I1009 19:05:14.431296  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:14.431700  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:14.635101  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:14.689463  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:14.692943  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:14.692988  161014 retry.go:31] will retry after 8.426490786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:14.931454  161014 type.go:168] "Request Body" body=""
	I1009 19:05:14.931531  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:14.931912  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:15.431649  161014 type.go:168] "Request Body" body=""
	I1009 19:05:15.431767  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:15.432139  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:15.432244  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:15.931808  161014 type.go:168] "Request Body" body=""
	I1009 19:05:15.931885  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:15.932226  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:16.430935  161014 type.go:168] "Request Body" body=""
	I1009 19:05:16.431026  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:16.431417  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:16.794854  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:16.849041  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:16.852200  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:16.852234  161014 retry.go:31] will retry after 11.902123756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:16.931535  161014 type.go:168] "Request Body" body=""
	I1009 19:05:16.931634  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:16.931986  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:17.431870  161014 type.go:168] "Request Body" body=""
	I1009 19:05:17.431977  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:17.432410  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:17.432479  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:17.931194  161014 type.go:168] "Request Body" body=""
	I1009 19:05:17.931301  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:17.931659  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:18.431314  161014 type.go:168] "Request Body" body=""
	I1009 19:05:18.431420  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:18.431851  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:18.931802  161014 type.go:168] "Request Body" body=""
	I1009 19:05:18.931891  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:18.932247  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:19.431889  161014 type.go:168] "Request Body" body=""
	I1009 19:05:19.431978  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:19.432365  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:19.930982  161014 type.go:168] "Request Body" body=""
	I1009 19:05:19.931070  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:19.931472  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:19.931543  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:20.431080  161014 type.go:168] "Request Body" body=""
	I1009 19:05:20.431159  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:20.431505  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:20.931084  161014 type.go:168] "Request Body" body=""
	I1009 19:05:20.931162  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:20.931465  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:21.431126  161014 type.go:168] "Request Body" body=""
	I1009 19:05:21.431210  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:21.431583  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:21.931228  161014 type.go:168] "Request Body" body=""
	I1009 19:05:21.931310  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:21.931673  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:21.931757  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:22.431250  161014 type.go:168] "Request Body" body=""
	I1009 19:05:22.431335  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:22.431706  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:22.931281  161014 type.go:168] "Request Body" body=""
	I1009 19:05:22.931373  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:22.931764  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:23.120080  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:23.178288  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:23.178344  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:23.178369  161014 retry.go:31] will retry after 12.554942652s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:23.431791  161014 type.go:168] "Request Body" body=""
	I1009 19:05:23.431875  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:23.432227  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:23.931667  161014 type.go:168] "Request Body" body=""
	I1009 19:05:23.931758  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:23.932103  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:23.932167  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:24.431140  161014 type.go:168] "Request Body" body=""
	I1009 19:05:24.431223  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:24.431607  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:24.931219  161014 type.go:168] "Request Body" body=""
	I1009 19:05:24.931297  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:24.931656  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:25.431282  161014 type.go:168] "Request Body" body=""
	I1009 19:05:25.431369  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:25.431754  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:25.931371  161014 type.go:168] "Request Body" body=""
	I1009 19:05:25.931475  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:25.931852  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:26.431721  161014 type.go:168] "Request Body" body=""
	I1009 19:05:26.431805  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:26.432173  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:26.432243  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:26.931895  161014 type.go:168] "Request Body" body=""
	I1009 19:05:26.931982  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:26.932327  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:27.430978  161014 type.go:168] "Request Body" body=""
	I1009 19:05:27.431069  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:27.431440  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:27.931122  161014 type.go:168] "Request Body" body=""
	I1009 19:05:27.931203  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:27.931568  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:28.431156  161014 type.go:168] "Request Body" body=""
	I1009 19:05:28.431240  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:28.431629  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:28.755128  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:28.809181  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:28.812331  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:28.812369  161014 retry.go:31] will retry after 17.899546939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:28.931943  161014 type.go:168] "Request Body" body=""
	I1009 19:05:28.932042  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:28.932423  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:28.932495  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:29.431031  161014 type.go:168] "Request Body" body=""
	I1009 19:05:29.431117  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:29.431488  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:29.931112  161014 type.go:168] "Request Body" body=""
	I1009 19:05:29.931191  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:29.931549  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:30.431108  161014 type.go:168] "Request Body" body=""
	I1009 19:05:30.431184  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:30.431580  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:30.931234  161014 type.go:168] "Request Body" body=""
	I1009 19:05:30.931310  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:30.931701  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:31.431358  161014 type.go:168] "Request Body" body=""
	I1009 19:05:31.431464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:31.431883  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:31.431968  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:31.931551  161014 type.go:168] "Request Body" body=""
	I1009 19:05:31.931654  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:31.932150  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:32.431843  161014 type.go:168] "Request Body" body=""
	I1009 19:05:32.431928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:32.432325  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:32.930923  161014 type.go:168] "Request Body" body=""
	I1009 19:05:32.931009  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:32.931419  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:33.431049  161014 type.go:168] "Request Body" body=""
	I1009 19:05:33.431139  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:33.431539  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:33.931442  161014 type.go:168] "Request Body" body=""
	I1009 19:05:33.931529  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:33.931921  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:33.931994  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:34.431615  161014 type.go:168] "Request Body" body=""
	I1009 19:05:34.431709  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:34.432084  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:34.931772  161014 type.go:168] "Request Body" body=""
	I1009 19:05:34.931853  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:34.932239  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:35.431990  161014 type.go:168] "Request Body" body=""
	I1009 19:05:35.432083  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:35.432473  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:35.733912  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:35.787306  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:35.790843  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:35.790879  161014 retry.go:31] will retry after 31.721699669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:35.931334  161014 type.go:168] "Request Body" body=""
	I1009 19:05:35.931474  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:35.931860  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:36.431788  161014 type.go:168] "Request Body" body=""
	I1009 19:05:36.431878  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:36.432242  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:36.432309  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:36.931065  161014 type.go:168] "Request Body" body=""
	I1009 19:05:36.931156  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:36.931540  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:37.431314  161014 type.go:168] "Request Body" body=""
	I1009 19:05:37.431439  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:37.431797  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:37.931603  161014 type.go:168] "Request Body" body=""
	I1009 19:05:37.931697  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:37.932081  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:38.431682  161014 type.go:168] "Request Body" body=""
	I1009 19:05:38.431775  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:38.432127  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:38.930953  161014 type.go:168] "Request Body" body=""
	I1009 19:05:38.931049  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:38.931414  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:38.931498  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:39.430956  161014 type.go:168] "Request Body" body=""
	I1009 19:05:39.431070  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:39.431453  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:39.931034  161014 type.go:168] "Request Body" body=""
	I1009 19:05:39.931145  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:39.931490  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:40.431075  161014 type.go:168] "Request Body" body=""
	I1009 19:05:40.431166  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:40.431582  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:40.931249  161014 type.go:168] "Request Body" body=""
	I1009 19:05:40.931336  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:40.931693  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:40.931756  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:41.431331  161014 type.go:168] "Request Body" body=""
	I1009 19:05:41.431437  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:41.431805  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:41.931445  161014 type.go:168] "Request Body" body=""
	I1009 19:05:41.931535  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:41.931928  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:42.431625  161014 type.go:168] "Request Body" body=""
	I1009 19:05:42.431705  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:42.432064  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:42.931717  161014 type.go:168] "Request Body" body=""
	I1009 19:05:42.931803  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:42.932175  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:42.932247  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:43.430857  161014 type.go:168] "Request Body" body=""
	I1009 19:05:43.430971  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:43.431317  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:43.931149  161014 type.go:168] "Request Body" body=""
	I1009 19:05:43.931232  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:43.931588  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:44.431181  161014 type.go:168] "Request Body" body=""
	I1009 19:05:44.431266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:44.431639  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:44.931222  161014 type.go:168] "Request Body" body=""
	I1009 19:05:44.931306  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:44.931692  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:45.431277  161014 type.go:168] "Request Body" body=""
	I1009 19:05:45.431360  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:45.431736  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:45.431802  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:45.931357  161014 type.go:168] "Request Body" body=""
	I1009 19:05:45.931462  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:45.931838  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:46.431506  161014 type.go:168] "Request Body" body=""
	I1009 19:05:46.431593  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:46.431956  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:46.712449  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:46.768626  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:46.768679  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:46.768704  161014 retry.go:31] will retry after 25.41172348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:46.930938  161014 type.go:168] "Request Body" body=""
	I1009 19:05:46.931055  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:46.931460  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:47.431076  161014 type.go:168] "Request Body" body=""
	I1009 19:05:47.431153  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:47.431556  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:47.931415  161014 type.go:168] "Request Body" body=""
	I1009 19:05:47.931510  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:47.931879  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:47.931959  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:48.431674  161014 type.go:168] "Request Body" body=""
	I1009 19:05:48.431759  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:48.432094  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:48.930896  161014 type.go:168] "Request Body" body=""
	I1009 19:05:48.931001  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:48.931373  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:49.430996  161014 type.go:168] "Request Body" body=""
	I1009 19:05:49.431076  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:49.431465  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:49.931262  161014 type.go:168] "Request Body" body=""
	I1009 19:05:49.931370  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:49.931789  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:50.431699  161014 type.go:168] "Request Body" body=""
	I1009 19:05:50.431782  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:50.432126  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:50.432204  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:50.930957  161014 type.go:168] "Request Body" body=""
	I1009 19:05:50.931084  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:50.931482  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:51.431250  161014 type.go:168] "Request Body" body=""
	I1009 19:05:51.431347  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:51.431706  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:51.931601  161014 type.go:168] "Request Body" body=""
	I1009 19:05:51.931698  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:51.932063  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:52.430862  161014 type.go:168] "Request Body" body=""
	I1009 19:05:52.430953  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:52.431298  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:52.931085  161014 type.go:168] "Request Body" body=""
	I1009 19:05:52.931179  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:52.931557  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:52.931624  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:53.431339  161014 type.go:168] "Request Body" body=""
	I1009 19:05:53.431459  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:53.431829  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:53.931637  161014 type.go:168] "Request Body" body=""
	I1009 19:05:53.931736  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:53.932120  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:54.430920  161014 type.go:168] "Request Body" body=""
	I1009 19:05:54.431014  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:54.431426  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:54.931205  161014 type.go:168] "Request Body" body=""
	I1009 19:05:54.931299  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:54.931695  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:54.931776  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:55.431596  161014 type.go:168] "Request Body" body=""
	I1009 19:05:55.431674  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:55.432023  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:55.931858  161014 type.go:168] "Request Body" body=""
	I1009 19:05:55.931949  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:55.932317  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:56.431017  161014 type.go:168] "Request Body" body=""
	I1009 19:05:56.431102  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:56.431477  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:56.931242  161014 type.go:168] "Request Body" body=""
	I1009 19:05:56.931328  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:56.931740  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:56.931822  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:57.431701  161014 type.go:168] "Request Body" body=""
	I1009 19:05:57.431787  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:57.432169  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:57.931004  161014 type.go:168] "Request Body" body=""
	I1009 19:05:57.931088  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:57.931492  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:58.430896  161014 type.go:168] "Request Body" body=""
	I1009 19:05:58.430977  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:58.431316  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:58.931136  161014 type.go:168] "Request Body" body=""
	I1009 19:05:58.931305  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:58.931701  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:59.431527  161014 type.go:168] "Request Body" body=""
	I1009 19:05:59.431619  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:59.431986  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:59.432056  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:59.931914  161014 type.go:168] "Request Body" body=""
	I1009 19:05:59.932022  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:59.932451  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:00.431235  161014 type.go:168] "Request Body" body=""
	I1009 19:06:00.431321  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:00.431700  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:00.931491  161014 type.go:168] "Request Body" body=""
	I1009 19:06:00.931598  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:00.932038  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:01.430870  161014 type.go:168] "Request Body" body=""
	I1009 19:06:01.430962  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:01.431351  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:01.931160  161014 type.go:168] "Request Body" body=""
	I1009 19:06:01.931259  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:01.931701  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:01.931781  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:02.431642  161014 type.go:168] "Request Body" body=""
	I1009 19:06:02.431741  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:02.432105  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:02.930912  161014 type.go:168] "Request Body" body=""
	I1009 19:06:02.931026  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:02.931440  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:03.431233  161014 type.go:168] "Request Body" body=""
	I1009 19:06:03.431316  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:03.431698  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:03.931548  161014 type.go:168] "Request Body" body=""
	I1009 19:06:03.931627  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:03.932000  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:03.932085  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:04.431884  161014 type.go:168] "Request Body" body=""
	I1009 19:06:04.431963  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:04.432329  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:04.931167  161014 type.go:168] "Request Body" body=""
	I1009 19:06:04.931256  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:04.931675  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:05.431519  161014 type.go:168] "Request Body" body=""
	I1009 19:06:05.431593  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:05.431983  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:05.931927  161014 type.go:168] "Request Body" body=""
	I1009 19:06:05.932019  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:05.932421  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:05.932517  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:06.431278  161014 type.go:168] "Request Body" body=""
	I1009 19:06:06.431359  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:06.431798  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:06.931667  161014 type.go:168] "Request Body" body=""
	I1009 19:06:06.931753  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:06.932149  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:07.430942  161014 type.go:168] "Request Body" body=""
	I1009 19:06:07.431028  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:07.431419  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:07.513672  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:06:07.571073  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:07.571125  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:06:07.571145  161014 retry.go:31] will retry after 23.39838606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
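
In the block above the storage-provisioner apply fails because kubectl's client-side validation needs to download the OpenAPI schema from the apiserver, and localhost:8441 is refusing connections, so minikube schedules another attempt after a randomized delay (the retry.go line). A minimal Go sketch of that retry-with-backoff pattern follows, assuming only that kubectl is on PATH; applyManifest and the two-minute budget are illustrative names and values, not minikube's addons code.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyManifest shells out to kubectl; by default kubectl validates the manifest
// against the apiserver's OpenAPI schema, which is why it fails while port 8441
// is still refusing connections.
func applyManifest(path string) error {
	out, err := exec.Command("kubectl", "apply", "--force", "-f", path).CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", path, err, out)
	}
	return nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative overall budget
	for attempt := 1; ; attempt++ {
		err := applyManifest("/etc/kubernetes/addons/storage-provisioner.yaml")
		if err == nil {
			fmt.Println("applied")
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("giving up:", err)
			return
		}
		// Randomized delay, similar in spirit to the ~20-25s waits logged above.
		wait := time.Duration(15+rand.Intn(15)) * time.Second
		fmt.Printf("attempt %d failed, will retry after %s\n", attempt, wait)
		time.Sleep(wait)
	}
}
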
	I1009 19:06:07.931687  161014 type.go:168] "Request Body" body=""
	I1009 19:06:07.931784  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:07.932135  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:08.430924  161014 type.go:168] "Request Body" body=""
	I1009 19:06:08.431034  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:08.431403  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:08.431469  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:08.931208  161014 type.go:168] "Request Body" body=""
	I1009 19:06:08.931288  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:08.931643  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:09.431520  161014 type.go:168] "Request Body" body=""
	I1009 19:06:09.431629  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:09.432018  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:09.931868  161014 type.go:168] "Request Body" body=""
	I1009 19:06:09.931945  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:09.932304  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:10.431151  161014 type.go:168] "Request Body" body=""
	I1009 19:06:10.431248  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:10.431669  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:10.431738  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:10.931500  161014 type.go:168] "Request Body" body=""
	I1009 19:06:10.931584  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:10.931948  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:11.431952  161014 type.go:168] "Request Body" body=""
	I1009 19:06:11.432052  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:11.432455  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:11.931228  161014 type.go:168] "Request Body" body=""
	I1009 19:06:11.931310  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:11.931689  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:12.181131  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:06:12.238294  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:12.238358  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:06:12.238405  161014 retry.go:31] will retry after 21.481583015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:06:12.431681  161014 type.go:168] "Request Body" body=""
	I1009 19:06:12.431761  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:12.432057  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:12.432128  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:12.931845  161014 type.go:168] "Request Body" body=""
	I1009 19:06:12.931939  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:12.932415  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:13.431004  161014 type.go:168] "Request Body" body=""
	I1009 19:06:13.431117  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:13.431483  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:13.931308  161014 type.go:168] "Request Body" body=""
	I1009 19:06:13.931404  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:13.931807  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:14.431415  161014 type.go:168] "Request Body" body=""
	I1009 19:06:14.431502  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:14.431906  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:14.931635  161014 type.go:168] "Request Body" body=""
	I1009 19:06:14.931725  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:14.932138  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:14.932205  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:15.431840  161014 type.go:168] "Request Body" body=""
	I1009 19:06:15.431927  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:15.432292  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:15.930896  161014 type.go:168] "Request Body" body=""
	I1009 19:06:15.930996  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:15.931404  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:16.431000  161014 type.go:168] "Request Body" body=""
	I1009 19:06:16.431088  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:16.431482  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:16.931089  161014 type.go:168] "Request Body" body=""
	I1009 19:06:16.931169  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:16.931606  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:17.431187  161014 type.go:168] "Request Body" body=""
	I1009 19:06:17.431266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:17.431645  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:17.431717  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:17.931505  161014 type.go:168] "Request Body" body=""
	I1009 19:06:17.931588  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:17.931977  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:18.431663  161014 type.go:168] "Request Body" body=""
	I1009 19:06:18.431753  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:18.432133  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:18.931039  161014 type.go:168] "Request Body" body=""
	I1009 19:06:18.931125  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:18.931496  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:19.431026  161014 type.go:168] "Request Body" body=""
	I1009 19:06:19.431101  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:19.431425  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:19.931079  161014 type.go:168] "Request Body" body=""
	I1009 19:06:19.931160  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:19.931533  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:19.931605  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:20.431142  161014 type.go:168] "Request Body" body=""
	I1009 19:06:20.431225  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:20.431606  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:20.931205  161014 type.go:168] "Request Body" body=""
	I1009 19:06:20.931288  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:20.931685  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:21.431270  161014 type.go:168] "Request Body" body=""
	I1009 19:06:21.431352  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:21.431727  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:21.931351  161014 type.go:168] "Request Body" body=""
	I1009 19:06:21.931459  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:21.931867  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:21.931960  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:22.431630  161014 type.go:168] "Request Body" body=""
	I1009 19:06:22.431720  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:22.432112  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:22.931909  161014 type.go:168] "Request Body" body=""
	I1009 19:06:22.932006  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:22.932466  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:23.431019  161014 type.go:168] "Request Body" body=""
	I1009 19:06:23.431108  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:23.431500  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:23.931352  161014 type.go:168] "Request Body" body=""
	I1009 19:06:23.931464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:23.931866  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:24.430863  161014 type.go:168] "Request Body" body=""
	I1009 19:06:24.430951  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:24.431355  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:24.431478  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:24.930971  161014 type.go:168] "Request Body" body=""
	I1009 19:06:24.931061  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:24.931476  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:25.431052  161014 type.go:168] "Request Body" body=""
	I1009 19:06:25.431143  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:25.431497  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:25.931072  161014 type.go:168] "Request Body" body=""
	I1009 19:06:25.931164  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:25.931594  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:26.430916  161014 type.go:168] "Request Body" body=""
	I1009 19:06:26.431010  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:26.431407  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:26.931057  161014 type.go:168] "Request Body" body=""
	I1009 19:06:26.931135  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:26.931533  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:26.931610  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:27.431142  161014 type.go:168] "Request Body" body=""
	I1009 19:06:27.431220  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:27.431622  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:27.931665  161014 type.go:168] "Request Body" body=""
	I1009 19:06:27.931758  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:27.932163  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:28.431861  161014 type.go:168] "Request Body" body=""
	I1009 19:06:28.431949  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:28.432310  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:28.931285  161014 type.go:168] "Request Body" body=""
	I1009 19:06:28.931362  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:28.931821  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:28.931892  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:29.431462  161014 type.go:168] "Request Body" body=""
	I1009 19:06:29.431547  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:29.432004  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:29.931701  161014 type.go:168] "Request Body" body=""
	I1009 19:06:29.931782  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:29.932180  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:30.431935  161014 type.go:168] "Request Body" body=""
	I1009 19:06:30.432026  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:30.432405  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:30.931026  161014 type.go:168] "Request Body" body=""
	I1009 19:06:30.931109  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:30.931522  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:30.970755  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:06:31.028107  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:31.028174  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:31.028309  161014 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:06:31.431764  161014 type.go:168] "Request Body" body=""
	I1009 19:06:31.431853  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:31.432208  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:31.432284  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:31.930867  161014 type.go:168] "Request Body" body=""
	I1009 19:06:31.930984  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:31.931336  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:32.430958  161014 type.go:168] "Request Body" body=""
	I1009 19:06:32.431047  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:32.431465  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:32.931031  161014 type.go:168] "Request Body" body=""
	I1009 19:06:32.931127  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:32.931496  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:33.431116  161014 type.go:168] "Request Body" body=""
	I1009 19:06:33.431195  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:33.431601  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:33.721082  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:06:33.781514  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:33.781597  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:33.781723  161014 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:06:33.784570  161014 out.go:179] * Enabled addons: 
	I1009 19:06:33.786444  161014 addons.go:514] duration metric: took 1m35.976729521s for enable addons: enabled=[]
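
Both retries fired while the apiserver was still refusing connections, so the storage-provisioner and default-storageclass callbacks gave up; the addon phase therefore finishes after about 1m36s with an empty result (enabled=[]), while the node-readiness poll shown above keeps running against port 8441.
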
	I1009 19:06:33.931193  161014 type.go:168] "Request Body" body=""
	I1009 19:06:33.931298  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:33.931708  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:33.931785  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:34.431608  161014 type.go:168] "Request Body" body=""
	I1009 19:06:34.431688  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:34.432050  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:34.931894  161014 type.go:168] "Request Body" body=""
	I1009 19:06:34.931982  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:34.932369  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:35.431177  161014 type.go:168] "Request Body" body=""
	I1009 19:06:35.431261  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:35.431656  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:35.931508  161014 type.go:168] "Request Body" body=""
	I1009 19:06:35.931632  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:35.932017  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:35.932080  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:36.431933  161014 type.go:168] "Request Body" body=""
	I1009 19:06:36.432042  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:36.432446  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:36.931225  161014 type.go:168] "Request Body" body=""
	I1009 19:06:36.931328  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:36.931704  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:37.431625  161014 type.go:168] "Request Body" body=""
	I1009 19:06:37.431738  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:37.432141  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:37.930857  161014 type.go:168] "Request Body" body=""
	I1009 19:06:37.930995  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:37.931342  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:38.431133  161014 type.go:168] "Request Body" body=""
	I1009 19:06:38.431214  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:38.431597  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:38.431683  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:38.931462  161014 type.go:168] "Request Body" body=""
	I1009 19:06:38.931563  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:38.931971  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:39.431871  161014 type.go:168] "Request Body" body=""
	I1009 19:06:39.431963  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:39.432315  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:39.931128  161014 type.go:168] "Request Body" body=""
	I1009 19:06:39.931220  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:39.931618  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:40.431437  161014 type.go:168] "Request Body" body=""
	I1009 19:06:40.431514  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:40.431890  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:40.431961  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:40.931810  161014 type.go:168] "Request Body" body=""
	I1009 19:06:40.931912  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:40.932290  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:41.431100  161014 type.go:168] "Request Body" body=""
	I1009 19:06:41.431218  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:41.431599  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:41.931346  161014 type.go:168] "Request Body" body=""
	I1009 19:06:41.931468  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:41.931837  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:42.431757  161014 type.go:168] "Request Body" body=""
	I1009 19:06:42.431845  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:42.432237  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:42.432298  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:42.931019  161014 type.go:168] "Request Body" body=""
	I1009 19:06:42.931113  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:42.931521  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:43.431303  161014 type.go:168] "Request Body" body=""
	I1009 19:06:43.431415  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:43.431782  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:43.931780  161014 type.go:168] "Request Body" body=""
	I1009 19:06:43.931864  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:43.932272  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:44.431107  161014 type.go:168] "Request Body" body=""
	I1009 19:06:44.431212  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:44.431609  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:44.931522  161014 type.go:168] "Request Body" body=""
	I1009 19:06:44.931612  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:44.932005  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:44.932091  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:45.430863  161014 type.go:168] "Request Body" body=""
	I1009 19:06:45.430955  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:45.431333  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:45.931195  161014 type.go:168] "Request Body" body=""
	I1009 19:06:45.931296  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:45.931727  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:46.431598  161014 type.go:168] "Request Body" body=""
	I1009 19:06:46.431682  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:46.432089  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:46.930909  161014 type.go:168] "Request Body" body=""
	I1009 19:06:46.931014  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:46.931410  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:47.431166  161014 type.go:168] "Request Body" body=""
	I1009 19:06:47.431244  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:47.431610  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:47.431679  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:47.931409  161014 type.go:168] "Request Body" body=""
	I1009 19:06:47.931495  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:47.931852  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:48.431707  161014 type.go:168] "Request Body" body=""
	I1009 19:06:48.431803  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:48.432224  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:48.931113  161014 type.go:168] "Request Body" body=""
	I1009 19:06:48.931196  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:48.931590  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:49.431438  161014 type.go:168] "Request Body" body=""
	I1009 19:06:49.431532  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:49.431933  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:49.432014  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:49.931847  161014 type.go:168] "Request Body" body=""
	I1009 19:06:49.931955  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:49.932365  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:50.431151  161014 type.go:168] "Request Body" body=""
	I1009 19:06:50.431252  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:50.431731  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:50.931584  161014 type.go:168] "Request Body" body=""
	I1009 19:06:50.931668  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:50.932034  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:51.431892  161014 type.go:168] "Request Body" body=""
	I1009 19:06:51.432001  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:51.432357  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:51.432451  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:51.931169  161014 type.go:168] "Request Body" body=""
	I1009 19:06:51.931251  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:51.931649  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:52.431585  161014 type.go:168] "Request Body" body=""
	I1009 19:06:52.431683  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:52.432058  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:52.931903  161014 type.go:168] "Request Body" body=""
	I1009 19:06:52.931994  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:52.932365  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:53.431140  161014 type.go:168] "Request Body" body=""
	I1009 19:06:53.431240  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:53.431607  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:53.931515  161014 type.go:168] "Request Body" body=""
	I1009 19:06:53.931602  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:53.931970  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:53.932045  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:54.431874  161014 type.go:168] "Request Body" body=""
	I1009 19:06:54.431956  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:54.432333  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:54.931110  161014 type.go:168] "Request Body" body=""
	I1009 19:06:54.931191  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:54.931572  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:55.431313  161014 type.go:168] "Request Body" body=""
	I1009 19:06:55.431422  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:55.431759  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:55.931628  161014 type.go:168] "Request Body" body=""
	I1009 19:06:55.931708  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:55.932052  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:55.932122  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:56.430861  161014 type.go:168] "Request Body" body=""
	I1009 19:06:56.430953  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:56.431299  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:56.931073  161014 type.go:168] "Request Body" body=""
	I1009 19:06:56.931162  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:56.931537  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:57.431318  161014 type.go:168] "Request Body" body=""
	I1009 19:06:57.431417  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:57.431759  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:57.931618  161014 type.go:168] "Request Body" body=""
	I1009 19:06:57.931839  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:57.932218  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:57.932279  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:58.431144  161014 type.go:168] "Request Body" body=""
	I1009 19:06:58.431334  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:58.431970  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:58.931861  161014 type.go:168] "Request Body" body=""
	I1009 19:06:58.931940  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:58.932311  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:59.431143  161014 type.go:168] "Request Body" body=""
	I1009 19:06:59.431223  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:59.431592  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:59.930909  161014 type.go:168] "Request Body" body=""
	I1009 19:06:59.931020  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:59.931371  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:00.430999  161014 type.go:168] "Request Body" body=""
	I1009 19:07:00.431081  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:00.431500  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:00.431566  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:00.931093  161014 type.go:168] "Request Body" body=""
	I1009 19:07:00.931180  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:00.931581  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:01.431360  161014 type.go:168] "Request Body" body=""
	I1009 19:07:01.431464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:01.431832  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:01.931692  161014 type.go:168] "Request Body" body=""
	I1009 19:07:01.931784  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:01.932184  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:02.430934  161014 type.go:168] "Request Body" body=""
	I1009 19:07:02.431018  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:02.431378  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:02.931191  161014 type.go:168] "Request Body" body=""
	I1009 19:07:02.931275  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:02.931689  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:02.931756  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:03.431523  161014 type.go:168] "Request Body" body=""
	I1009 19:07:03.431604  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:03.431991  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:03.930871  161014 type.go:168] "Request Body" body=""
	I1009 19:07:03.930969  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:03.931407  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:04.431200  161014 type.go:168] "Request Body" body=""
	I1009 19:07:04.431281  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:04.431686  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:04.931603  161014 type.go:168] "Request Body" body=""
	I1009 19:07:04.931684  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:04.932085  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:04.932154  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:05.430888  161014 type.go:168] "Request Body" body=""
	I1009 19:07:05.430980  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:05.431365  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:05.931176  161014 type.go:168] "Request Body" body=""
	I1009 19:07:05.931266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:05.931718  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:06.431607  161014 type.go:168] "Request Body" body=""
	I1009 19:07:06.431688  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:06.432075  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:06.930900  161014 type.go:168] "Request Body" body=""
	I1009 19:07:06.931004  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:06.931420  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:07.431211  161014 type.go:168] "Request Body" body=""
	I1009 19:07:07.431297  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:07.431674  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:07.431738  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:07.931521  161014 type.go:168] "Request Body" body=""
	I1009 19:07:07.931600  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:07.931988  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:08.431938  161014 type.go:168] "Request Body" body=""
	I1009 19:07:08.432023  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:08.432368  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:08.931198  161014 type.go:168] "Request Body" body=""
	I1009 19:07:08.931276  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:08.931670  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:09.431634  161014 type.go:168] "Request Body" body=""
	I1009 19:07:09.431764  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:09.432193  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:09.432271  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:09.931021  161014 type.go:168] "Request Body" body=""
	I1009 19:07:09.931112  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:09.931511  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:10.431319  161014 type.go:168] "Request Body" body=""
	I1009 19:07:10.431421  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:10.431776  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:10.931586  161014 type.go:168] "Request Body" body=""
	I1009 19:07:10.931675  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:10.932028  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:11.431928  161014 type.go:168] "Request Body" body=""
	I1009 19:07:11.432018  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:11.432409  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:11.432493  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:11.931228  161014 type.go:168] "Request Body" body=""
	I1009 19:07:11.931314  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:11.931691  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:12.431493  161014 type.go:168] "Request Body" body=""
	I1009 19:07:12.431576  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:12.431970  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:12.931830  161014 type.go:168] "Request Body" body=""
	I1009 19:07:12.931910  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:12.932268  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:13.431040  161014 type.go:168] "Request Body" body=""
	I1009 19:07:13.431128  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:13.431515  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:13.931313  161014 type.go:168] "Request Body" body=""
	I1009 19:07:13.931411  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:13.931829  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:13.931895  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:14.431732  161014 type.go:168] "Request Body" body=""
	I1009 19:07:14.431830  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:14.432198  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:14.931016  161014 type.go:168] "Request Body" body=""
	I1009 19:07:14.931107  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:14.931472  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:15.431233  161014 type.go:168] "Request Body" body=""
	I1009 19:07:15.431326  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:15.431754  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:15.931605  161014 type.go:168] "Request Body" body=""
	I1009 19:07:15.931691  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:15.932042  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:15.932112  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:16.430847  161014 type.go:168] "Request Body" body=""
	I1009 19:07:16.430926  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:16.431288  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:16.931059  161014 type.go:168] "Request Body" body=""
	I1009 19:07:16.931135  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:16.931483  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:17.431236  161014 type.go:168] "Request Body" body=""
	I1009 19:07:17.431328  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:17.431725  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:17.931584  161014 type.go:168] "Request Body" body=""
	I1009 19:07:17.931680  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:17.932068  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:17.932144  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:18.430878  161014 type.go:168] "Request Body" body=""
	I1009 19:07:18.430959  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:18.431336  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:18.931220  161014 type.go:168] "Request Body" body=""
	I1009 19:07:18.931308  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:18.931716  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:19.431622  161014 type.go:168] "Request Body" body=""
	I1009 19:07:19.431711  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:19.432084  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:19.930887  161014 type.go:168] "Request Body" body=""
	I1009 19:07:19.930970  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:19.931335  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:20.431128  161014 type.go:168] "Request Body" body=""
	I1009 19:07:20.431228  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:20.431607  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:20.431677  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:20.931571  161014 type.go:168] "Request Body" body=""
	I1009 19:07:20.931652  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:20.932025  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:21.431914  161014 type.go:168] "Request Body" body=""
	I1009 19:07:21.432004  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:21.432437  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:21.931260  161014 type.go:168] "Request Body" body=""
	I1009 19:07:21.931349  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:21.931776  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:22.431637  161014 type.go:168] "Request Body" body=""
	I1009 19:07:22.431729  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:22.432091  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:22.432158  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:22.930926  161014 type.go:168] "Request Body" body=""
	I1009 19:07:22.931021  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:22.931412  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:23.431182  161014 type.go:168] "Request Body" body=""
	I1009 19:07:23.431266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:23.431631  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:23.931458  161014 type.go:168] "Request Body" body=""
	I1009 19:07:23.931550  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:23.931920  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:24.431853  161014 type.go:168] "Request Body" body=""
	I1009 19:07:24.431948  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:24.432326  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:24.432422  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:24.931143  161014 type.go:168] "Request Body" body=""
	I1009 19:07:24.931223  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:24.931599  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:25.431358  161014 type.go:168] "Request Body" body=""
	I1009 19:07:25.431464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:25.431821  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:25.931703  161014 type.go:168] "Request Body" body=""
	I1009 19:07:25.931787  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:25.932180  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:26.430976  161014 type.go:168] "Request Body" body=""
	I1009 19:07:26.431075  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:26.431458  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:26.931245  161014 type.go:168] "Request Body" body=""
	I1009 19:07:26.931331  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:26.931713  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:26.931784  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:27.431576  161014 type.go:168] "Request Body" body=""
	I1009 19:07:27.431668  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:27.432031  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:27.931772  161014 type.go:168] "Request Body" body=""
	I1009 19:07:27.931862  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:27.932254  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:28.431022  161014 type.go:168] "Request Body" body=""
	I1009 19:07:28.431102  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:28.431482  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:28.931348  161014 type.go:168] "Request Body" body=""
	I1009 19:07:28.931459  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:28.931844  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:28.931911  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:29.431781  161014 type.go:168] "Request Body" body=""
	I1009 19:07:29.431865  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:29.432226  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:29.931019  161014 type.go:168] "Request Body" body=""
	I1009 19:07:29.931099  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:29.931495  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:30.431235  161014 type.go:168] "Request Body" body=""
	I1009 19:07:30.431318  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:30.431699  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:30.931618  161014 type.go:168] "Request Body" body=""
	I1009 19:07:30.931726  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:30.932096  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:30.932155  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:31.430950  161014 type.go:168] "Request Body" body=""
	I1009 19:07:31.431039  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:31.431429  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:31.931226  161014 type.go:168] "Request Body" body=""
	I1009 19:07:31.931324  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:31.931743  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:32.431688  161014 type.go:168] "Request Body" body=""
	I1009 19:07:32.431781  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:32.432184  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:32.930987  161014 type.go:168] "Request Body" body=""
	I1009 19:07:32.931070  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:32.931473  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:33.431239  161014 type.go:168] "Request Body" body=""
	I1009 19:07:33.431321  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:33.431722  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:33.431792  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:33.931516  161014 type.go:168] "Request Body" body=""
	I1009 19:07:33.931606  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:33.931963  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:34.431848  161014 type.go:168] "Request Body" body=""
	I1009 19:07:34.431929  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:34.432337  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:34.931149  161014 type.go:168] "Request Body" body=""
	I1009 19:07:34.931233  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:34.931610  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:35.431433  161014 type.go:168] "Request Body" body=""
	I1009 19:07:35.431519  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:35.431884  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:35.431951  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:35.931757  161014 type.go:168] "Request Body" body=""
	I1009 19:07:35.931834  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:35.932194  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:36.431002  161014 type.go:168] "Request Body" body=""
	I1009 19:07:36.431092  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:36.431521  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:36.931304  161014 type.go:168] "Request Body" body=""
	I1009 19:07:36.931415  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:36.931771  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:37.431635  161014 type.go:168] "Request Body" body=""
	I1009 19:07:37.431735  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:37.432135  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:37.432203  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:37.931637  161014 type.go:168] "Request Body" body=""
	I1009 19:07:37.931755  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:37.932124  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:38.430922  161014 type.go:168] "Request Body" body=""
	I1009 19:07:38.431020  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:38.431405  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:38.931217  161014 type.go:168] "Request Body" body=""
	I1009 19:07:38.931295  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:38.931651  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:39.431495  161014 type.go:168] "Request Body" body=""
	I1009 19:07:39.431575  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:39.431936  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:39.931858  161014 type.go:168] "Request Body" body=""
	I1009 19:07:39.931940  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:39.932326  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:39.932421  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:40.431161  161014 type.go:168] "Request Body" body=""
	I1009 19:07:40.431255  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:40.431615  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:40.931366  161014 type.go:168] "Request Body" body=""
	I1009 19:07:40.931491  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:40.931869  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:41.431767  161014 type.go:168] "Request Body" body=""
	I1009 19:07:41.431861  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:41.432350  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:41.931160  161014 type.go:168] "Request Body" body=""
	I1009 19:07:41.931250  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:41.931735  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:42.431633  161014 type.go:168] "Request Body" body=""
	I1009 19:07:42.431732  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:42.432111  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:42.432176  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:42.930929  161014 type.go:168] "Request Body" body=""
	I1009 19:07:42.931031  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:42.931442  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:43.431234  161014 type.go:168] "Request Body" body=""
	I1009 19:07:43.431336  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:43.431722  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:43.931601  161014 type.go:168] "Request Body" body=""
	I1009 19:07:43.931683  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:43.932053  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:44.430865  161014 type.go:168] "Request Body" body=""
	I1009 19:07:44.430947  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:44.431356  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:44.931167  161014 type.go:168] "Request Body" body=""
	I1009 19:07:44.931250  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:44.931627  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:44.931696  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:45.431431  161014 type.go:168] "Request Body" body=""
	I1009 19:07:45.431510  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:45.431847  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:45.931770  161014 type.go:168] "Request Body" body=""
	I1009 19:07:45.931853  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:45.932210  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:46.430939  161014 type.go:168] "Request Body" body=""
	I1009 19:07:46.431018  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:46.431347  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:46.931133  161014 type.go:168] "Request Body" body=""
	I1009 19:07:46.931213  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:46.931599  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:47.431337  161014 type.go:168] "Request Body" body=""
	I1009 19:07:47.431449  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:47.431806  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:47.431876  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:47.931598  161014 type.go:168] "Request Body" body=""
	I1009 19:07:47.931682  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:47.932028  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:48.431835  161014 type.go:168] "Request Body" body=""
	I1009 19:07:48.431919  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:48.432273  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:48.931089  161014 type.go:168] "Request Body" body=""
	I1009 19:07:48.931179  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:48.931527  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:49.431272  161014 type.go:168] "Request Body" body=""
	I1009 19:07:49.431350  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:49.431715  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:49.931579  161014 type.go:168] "Request Body" body=""
	I1009 19:07:49.931664  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:49.932040  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:49.932107  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:50.431582  161014 type.go:168] "Request Body" body=""
	I1009 19:07:50.431662  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:50.432003  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:50.931872  161014 type.go:168] "Request Body" body=""
	I1009 19:07:50.931951  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:50.932322  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:51.431016  161014 type.go:168] "Request Body" body=""
	I1009 19:07:51.431095  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:51.431478  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:51.931270  161014 type.go:168] "Request Body" body=""
	I1009 19:07:51.931349  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:51.931734  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:52.431662  161014 type.go:168] "Request Body" body=""
	I1009 19:07:52.431743  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:52.432165  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:52.432255  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:52.931027  161014 type.go:168] "Request Body" body=""
	I1009 19:07:52.931111  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:52.931524  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:53.431299  161014 type.go:168] "Request Body" body=""
	I1009 19:07:53.431409  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:53.431777  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:53.931692  161014 type.go:168] "Request Body" body=""
	I1009 19:07:53.931802  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:53.932188  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:54.431033  161014 type.go:168] "Request Body" body=""
	I1009 19:07:54.431116  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:54.431506  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:54.931262  161014 type.go:168] "Request Body" body=""
	I1009 19:07:54.931371  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:54.931818  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:54.931896  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:55.431748  161014 type.go:168] "Request Body" body=""
	I1009 19:07:55.431839  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:55.432227  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:55.931001  161014 type.go:168] "Request Body" body=""
	I1009 19:07:55.931091  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:55.931464  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:56.431257  161014 type.go:168] "Request Body" body=""
	I1009 19:07:56.431342  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:56.431727  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:56.931602  161014 type.go:168] "Request Body" body=""
	I1009 19:07:56.931701  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:56.932081  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:56.932152  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:57.430910  161014 type.go:168] "Request Body" body=""
	I1009 19:07:57.431002  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:57.431362  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:57.931308  161014 type.go:168] "Request Body" body=""
	I1009 19:07:57.931413  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:57.931773  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:58.431643  161014 type.go:168] "Request Body" body=""
	I1009 19:07:58.431802  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:58.432134  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:58.931081  161014 type.go:168] "Request Body" body=""
	I1009 19:07:58.931169  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:58.931540  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:59.431310  161014 type.go:168] "Request Body" body=""
	I1009 19:07:59.431416  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:59.431835  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:59.431910  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:59.931738  161014 type.go:168] "Request Body" body=""
	I1009 19:07:59.931826  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:59.932198  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:00.430977  161014 type.go:168] "Request Body" body=""
	I1009 19:08:00.431073  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:00.431459  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:00.931240  161014 type.go:168] "Request Body" body=""
	I1009 19:08:00.931327  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:00.931726  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:01.431608  161014 type.go:168] "Request Body" body=""
	I1009 19:08:01.431703  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:01.432081  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:01.432155  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:01.930901  161014 type.go:168] "Request Body" body=""
	I1009 19:08:01.930998  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:01.931353  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:02.431155  161014 type.go:168] "Request Body" body=""
	I1009 19:08:02.431246  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:02.431683  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:02.931507  161014 type.go:168] "Request Body" body=""
	I1009 19:08:02.931648  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:02.932004  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:03.431604  161014 type.go:168] "Request Body" body=""
	I1009 19:08:03.431682  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:03.432043  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:03.930851  161014 type.go:168] "Request Body" body=""
	I1009 19:08:03.930932  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:03.931328  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:03.931434  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:04.431148  161014 type.go:168] "Request Body" body=""
	I1009 19:08:04.431226  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:04.431671  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:04.931497  161014 type.go:168] "Request Body" body=""
	I1009 19:08:04.931576  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:04.931933  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:05.431818  161014 type.go:168] "Request Body" body=""
	I1009 19:08:05.431913  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:05.432337  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:05.931111  161014 type.go:168] "Request Body" body=""
	I1009 19:08:05.931188  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:05.931598  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:05.931665  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:06.431433  161014 type.go:168] "Request Body" body=""
	I1009 19:08:06.431518  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:06.431897  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:06.931739  161014 type.go:168] "Request Body" body=""
	I1009 19:08:06.931825  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:06.932190  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:07.431010  161014 type.go:168] "Request Body" body=""
	I1009 19:08:07.431098  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:07.431492  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:07.931321  161014 type.go:168] "Request Body" body=""
	I1009 19:08:07.931478  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:07.931847  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:07.931911  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:08.431736  161014 type.go:168] "Request Body" body=""
	I1009 19:08:08.431826  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:08.432199  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:08.931147  161014 type.go:168] "Request Body" body=""
	I1009 19:08:08.931256  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:08.931581  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:09.431348  161014 type.go:168] "Request Body" body=""
	I1009 19:08:09.431501  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:09.431847  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:09.931761  161014 type.go:168] "Request Body" body=""
	I1009 19:08:09.931868  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:09.932264  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:09.932358  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:10.431111  161014 type.go:168] "Request Body" body=""
	I1009 19:08:10.431226  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:10.431600  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:10.931402  161014 type.go:168] "Request Body" body=""
	I1009 19:08:10.931502  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:10.931871  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:11.431784  161014 type.go:168] "Request Body" body=""
	I1009 19:08:11.431872  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:11.432233  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:11.931048  161014 type.go:168] "Request Body" body=""
	I1009 19:08:11.931144  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:11.931576  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:12.431421  161014 type.go:168] "Request Body" body=""
	I1009 19:08:12.431503  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:12.431862  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:12.431928  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:12.931757  161014 type.go:168] "Request Body" body=""
	I1009 19:08:12.931854  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:12.932305  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:13.431097  161014 type.go:168] "Request Body" body=""
	I1009 19:08:13.431185  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:13.431628  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:13.931448  161014 type.go:168] "Request Body" body=""
	I1009 19:08:13.931544  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:13.931895  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:14.431813  161014 type.go:168] "Request Body" body=""
	I1009 19:08:14.431896  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:14.432325  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:14.432452  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:14.931193  161014 type.go:168] "Request Body" body=""
	I1009 19:08:14.931304  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:14.931724  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:15.431610  161014 type.go:168] "Request Body" body=""
	I1009 19:08:15.431784  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:15.432189  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:15.930996  161014 type.go:168] "Request Body" body=""
	I1009 19:08:15.931076  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:15.931476  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:16.431279  161014 type.go:168] "Request Body" body=""
	I1009 19:08:16.431364  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:16.431823  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:16.931708  161014 type.go:168] "Request Body" body=""
	I1009 19:08:16.931791  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:16.932165  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:16.932241  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:17.430990  161014 type.go:168] "Request Body" body=""
	I1009 19:08:17.431074  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:17.431506  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:17.931431  161014 type.go:168] "Request Body" body=""
	I1009 19:08:17.931525  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:17.931892  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:18.431806  161014 type.go:168] "Request Body" body=""
	I1009 19:08:18.431897  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:18.432299  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:18.931120  161014 type.go:168] "Request Body" body=""
	I1009 19:08:18.931214  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:18.931614  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:19.431514  161014 type.go:168] "Request Body" body=""
	I1009 19:08:19.431606  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:19.432047  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:19.432124  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:19.931598  161014 type.go:168] "Request Body" body=""
	I1009 19:08:19.931691  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:19.932042  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:20.431891  161014 type.go:168] "Request Body" body=""
	I1009 19:08:20.431971  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:20.432405  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:20.931180  161014 type.go:168] "Request Body" body=""
	I1009 19:08:20.931263  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:20.931621  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:21.431543  161014 type.go:168] "Request Body" body=""
	I1009 19:08:21.431622  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:21.432021  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:21.931880  161014 type.go:168] "Request Body" body=""
	I1009 19:08:21.931973  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:21.932344  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:21.932455  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:22.431220  161014 type.go:168] "Request Body" body=""
	I1009 19:08:22.431312  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:22.431735  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:22.931611  161014 type.go:168] "Request Body" body=""
	I1009 19:08:22.931692  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:22.932047  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:23.430844  161014 type.go:168] "Request Body" body=""
	I1009 19:08:23.430928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:23.431339  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:23.931177  161014 type.go:168] "Request Body" body=""
	I1009 19:08:23.931280  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:23.931703  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:24.431533  161014 type.go:168] "Request Body" body=""
	I1009 19:08:24.431623  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:24.432029  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:24.432099  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:24.930846  161014 type.go:168] "Request Body" body=""
	I1009 19:08:24.930940  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:24.931301  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:25.431093  161014 type.go:168] "Request Body" body=""
	I1009 19:08:25.431180  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:25.431586  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:25.931364  161014 type.go:168] "Request Body" body=""
	I1009 19:08:25.931490  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:25.931848  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:26.431757  161014 type.go:168] "Request Body" body=""
	I1009 19:08:26.431844  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:26.432286  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:26.432356  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:26.931111  161014 type.go:168] "Request Body" body=""
	I1009 19:08:26.931219  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:26.931654  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:27.431562  161014 type.go:168] "Request Body" body=""
	I1009 19:08:27.431657  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:27.432104  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:27.931917  161014 type.go:168] "Request Body" body=""
	I1009 19:08:27.932031  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:27.932479  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:28.431253  161014 type.go:168] "Request Body" body=""
	I1009 19:08:28.431334  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:28.431741  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:28.931701  161014 type.go:168] "Request Body" body=""
	I1009 19:08:28.931793  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:28.932147  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:28.932231  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:29.430994  161014 type.go:168] "Request Body" body=""
	I1009 19:08:29.431076  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:29.431507  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:29.931284  161014 type.go:168] "Request Body" body=""
	I1009 19:08:29.931372  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:29.931786  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:30.431725  161014 type.go:168] "Request Body" body=""
	I1009 19:08:30.431807  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:30.432196  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:30.930995  161014 type.go:168] "Request Body" body=""
	I1009 19:08:30.931086  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:30.931489  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:31.431293  161014 type.go:168] "Request Body" body=""
	I1009 19:08:31.431407  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:31.431802  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:31.431899  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:31.931763  161014 type.go:168] "Request Body" body=""
	I1009 19:08:31.931847  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:31.932233  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:32.431064  161014 type.go:168] "Request Body" body=""
	I1009 19:08:32.431143  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:32.431569  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:32.931367  161014 type.go:168] "Request Body" body=""
	I1009 19:08:32.931475  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:32.931834  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:33.431666  161014 type.go:168] "Request Body" body=""
	I1009 19:08:33.431746  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:33.432152  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:33.432228  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:33.931085  161014 type.go:168] "Request Body" body=""
	I1009 19:08:33.931187  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:33.931603  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:34.431399  161014 type.go:168] "Request Body" body=""
	I1009 19:08:34.431485  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:34.431891  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:34.931782  161014 type.go:168] "Request Body" body=""
	I1009 19:08:34.931877  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:34.932244  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:35.431033  161014 type.go:168] "Request Body" body=""
	I1009 19:08:35.431120  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:35.431472  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:35.931247  161014 type.go:168] "Request Body" body=""
	I1009 19:08:35.931336  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:35.931759  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:35.931829  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:36.431682  161014 type.go:168] "Request Body" body=""
	I1009 19:08:36.431785  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:36.432193  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:36.931013  161014 type.go:168] "Request Body" body=""
	I1009 19:08:36.931099  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:36.931470  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:37.431265  161014 type.go:168] "Request Body" body=""
	I1009 19:08:37.431370  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:37.431819  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:37.931612  161014 type.go:168] "Request Body" body=""
	I1009 19:08:37.931700  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:37.932079  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:37.932145  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:38.430913  161014 type.go:168] "Request Body" body=""
	I1009 19:08:38.431022  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:38.431519  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:38.931233  161014 type.go:168] "Request Body" body=""
	I1009 19:08:38.931319  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:38.931686  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:39.431521  161014 type.go:168] "Request Body" body=""
	I1009 19:08:39.431627  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:39.432049  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:39.931904  161014 type.go:168] "Request Body" body=""
	I1009 19:08:39.932008  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:39.932353  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:39.932451  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:40.431183  161014 type.go:168] "Request Body" body=""
	I1009 19:08:40.431282  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:40.431716  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:40.931624  161014 type.go:168] "Request Body" body=""
	I1009 19:08:40.931713  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:40.932079  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:41.430889  161014 type.go:168] "Request Body" body=""
	I1009 19:08:41.430987  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:41.431423  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:41.931240  161014 type.go:168] "Request Body" body=""
	I1009 19:08:41.931324  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:41.931700  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:42.431534  161014 type.go:168] "Request Body" body=""
	I1009 19:08:42.431639  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:42.432064  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:42.432142  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:42.930885  161014 type.go:168] "Request Body" body=""
	I1009 19:08:42.930975  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:42.931354  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:43.431227  161014 type.go:168] "Request Body" body=""
	I1009 19:08:43.431323  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:43.431715  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:43.931552  161014 type.go:168] "Request Body" body=""
	I1009 19:08:43.931632  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:43.931992  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:44.431828  161014 type.go:168] "Request Body" body=""
	I1009 19:08:44.431924  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:44.432325  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:44.432415  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:44.931136  161014 type.go:168] "Request Body" body=""
	I1009 19:08:44.931245  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:44.931664  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:45.431554  161014 type.go:168] "Request Body" body=""
	I1009 19:08:45.431649  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:45.432042  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:45.931929  161014 type.go:168] "Request Body" body=""
	I1009 19:08:45.932032  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:45.932456  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:46.431215  161014 type.go:168] "Request Body" body=""
	I1009 19:08:46.431303  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:46.431675  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:46.931516  161014 type.go:168] "Request Body" body=""
	I1009 19:08:46.931612  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:46.932033  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:46.932105  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:47.431930  161014 type.go:168] "Request Body" body=""
	I1009 19:08:47.432024  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:47.432404  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:47.931253  161014 type.go:168] "Request Body" body=""
	I1009 19:08:47.931351  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:47.931772  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:48.431679  161014 type.go:168] "Request Body" body=""
	I1009 19:08:48.431767  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:48.432147  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:48.930986  161014 type.go:168] "Request Body" body=""
	I1009 19:08:48.931073  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:48.931466  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:49.431246  161014 type.go:168] "Request Body" body=""
	I1009 19:08:49.431332  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:49.431709  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:49.431791  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:49.931583  161014 type.go:168] "Request Body" body=""
	I1009 19:08:49.931665  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:49.932043  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:50.430854  161014 type.go:168] "Request Body" body=""
	I1009 19:08:50.430942  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:50.431310  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:50.931059  161014 type.go:168] "Request Body" body=""
	I1009 19:08:50.931138  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:50.931534  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:51.431317  161014 type.go:168] "Request Body" body=""
	I1009 19:08:51.431423  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:51.431783  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:51.431860  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:51.931678  161014 type.go:168] "Request Body" body=""
	I1009 19:08:51.931770  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:51.932161  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:52.430940  161014 type.go:168] "Request Body" body=""
	I1009 19:08:52.431043  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:52.431471  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:52.931234  161014 type.go:168] "Request Body" body=""
	I1009 19:08:52.931317  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:52.931697  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:53.431539  161014 type.go:168] "Request Body" body=""
	I1009 19:08:53.431626  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:53.432040  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:53.432105  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:53.931898  161014 type.go:168] "Request Body" body=""
	I1009 19:08:53.931980  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:53.932334  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:54.431123  161014 type.go:168] "Request Body" body=""
	I1009 19:08:54.431206  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:54.431572  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:54.931007  161014 type.go:168] "Request Body" body=""
	I1009 19:08:54.931094  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:54.931473  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:55.431255  161014 type.go:168] "Request Body" body=""
	I1009 19:08:55.431334  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:55.431719  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:55.931595  161014 type.go:168] "Request Body" body=""
	I1009 19:08:55.931684  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:55.932059  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:55.932132  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:56.430905  161014 type.go:168] "Request Body" body=""
	I1009 19:08:56.430996  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:56.431358  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:56.931139  161014 type.go:168] "Request Body" body=""
	I1009 19:08:56.931225  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:56.931614  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:57.431422  161014 type.go:168] "Request Body" body=""
	I1009 19:08:57.431520  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:57.431890  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:57.931717  161014 type.go:168] "Request Body" body=""
	I1009 19:08:57.931804  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:57.932243  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:57.932309  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:58.431442  161014 type.go:168] "Request Body" body=""
	I1009 19:08:58.431719  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:58.432305  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:58.931643  161014 type.go:168] "Request Body" body=""
	I1009 19:08:58.931736  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:58.932089  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:59.431793  161014 type.go:168] "Request Body" body=""
	I1009 19:08:59.431868  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:59.432216  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:59.931889  161014 type.go:168] "Request Body" body=""
	I1009 19:08:59.931982  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:59.932322  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:59.932430  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:00.430938  161014 type.go:168] "Request Body" body=""
	I1009 19:09:00.431025  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:00.431413  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:00.930953  161014 type.go:168] "Request Body" body=""
	I1009 19:09:00.931042  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:00.931443  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:01.431021  161014 type.go:168] "Request Body" body=""
	I1009 19:09:01.431114  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:01.431513  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:01.931074  161014 type.go:168] "Request Body" body=""
	I1009 19:09:01.931154  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:01.931545  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:02.431340  161014 type.go:168] "Request Body" body=""
	I1009 19:09:02.431449  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:02.431830  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:02.431902  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:02.931823  161014 type.go:168] "Request Body" body=""
	I1009 19:09:02.931913  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:02.932314  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:03.431114  161014 type.go:168] "Request Body" body=""
	I1009 19:09:03.431193  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:03.431578  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:03.931464  161014 type.go:168] "Request Body" body=""
	I1009 19:09:03.931552  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:03.931986  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:04.431831  161014 type.go:168] "Request Body" body=""
	I1009 19:09:04.431934  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:04.432314  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:04.432398  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:04.931129  161014 type.go:168] "Request Body" body=""
	I1009 19:09:04.931216  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:04.931674  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:05.431533  161014 type.go:168] "Request Body" body=""
	I1009 19:09:05.431611  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:05.432021  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:05.931854  161014 type.go:168] "Request Body" body=""
	I1009 19:09:05.931940  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:05.932334  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:06.431088  161014 type.go:168] "Request Body" body=""
	I1009 19:09:06.431167  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:06.431515  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:06.931278  161014 type.go:168] "Request Body" body=""
	I1009 19:09:06.931362  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:06.931748  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:06.931816  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:07.431644  161014 type.go:168] "Request Body" body=""
	I1009 19:09:07.431747  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:07.432178  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:07.931866  161014 type.go:168] "Request Body" body=""
	I1009 19:09:07.931950  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:07.932290  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:08.431090  161014 type.go:168] "Request Body" body=""
	I1009 19:09:08.431172  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:08.431650  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:08.931429  161014 type.go:168] "Request Body" body=""
	I1009 19:09:08.931507  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:08.931843  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:08.931909  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:09.431805  161014 type.go:168] "Request Body" body=""
	I1009 19:09:09.431897  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:09.432328  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:09.931113  161014 type.go:168] "Request Body" body=""
	I1009 19:09:09.931194  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:09.931569  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:10.431340  161014 type.go:168] "Request Body" body=""
	I1009 19:09:10.431473  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:10.431864  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:10.931696  161014 type.go:168] "Request Body" body=""
	I1009 19:09:10.931778  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:10.932062  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:10.932116  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:11.430855  161014 type.go:168] "Request Body" body=""
	I1009 19:09:11.430938  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:11.431371  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:11.931153  161014 type.go:168] "Request Body" body=""
	I1009 19:09:11.931230  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:11.931601  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:12.431453  161014 type.go:168] "Request Body" body=""
	I1009 19:09:12.431539  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:12.431968  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:12.931803  161014 type.go:168] "Request Body" body=""
	I1009 19:09:12.931890  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:12.932230  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:12.932299  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:13.431049  161014 type.go:168] "Request Body" body=""
	I1009 19:09:13.431141  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:13.431581  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:13.931422  161014 type.go:168] "Request Body" body=""
	I1009 19:09:13.931504  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:13.931849  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:14.431710  161014 type.go:168] "Request Body" body=""
	I1009 19:09:14.431803  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:14.432200  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:14.930978  161014 type.go:168] "Request Body" body=""
	I1009 19:09:14.931058  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:14.931421  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:15.431205  161014 type.go:168] "Request Body" body=""
	I1009 19:09:15.431290  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:15.431792  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:15.431868  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:15.931738  161014 type.go:168] "Request Body" body=""
	I1009 19:09:15.931822  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:15.932171  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:16.430949  161014 type.go:168] "Request Body" body=""
	I1009 19:09:16.431033  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:16.431370  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:16.931168  161014 type.go:168] "Request Body" body=""
	I1009 19:09:16.931244  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:16.931635  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:17.431446  161014 type.go:168] "Request Body" body=""
	I1009 19:09:17.431534  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:17.431909  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:17.431982  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:17.931495  161014 type.go:168] "Request Body" body=""
	I1009 19:09:17.931580  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:17.931927  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:18.431744  161014 type.go:168] "Request Body" body=""
	I1009 19:09:18.431828  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:18.432200  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:18.931151  161014 type.go:168] "Request Body" body=""
	I1009 19:09:18.931250  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:18.931652  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:19.431441  161014 type.go:168] "Request Body" body=""
	I1009 19:09:19.431529  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:19.431984  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:19.432070  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:19.931848  161014 type.go:168] "Request Body" body=""
	I1009 19:09:19.931941  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:19.932309  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:20.431088  161014 type.go:168] "Request Body" body=""
	I1009 19:09:20.431164  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:20.431555  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:20.931352  161014 type.go:168] "Request Body" body=""
	I1009 19:09:20.931455  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:20.931826  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:21.431728  161014 type.go:168] "Request Body" body=""
	I1009 19:09:21.431814  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:21.432175  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:21.432242  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:21.930958  161014 type.go:168] "Request Body" body=""
	I1009 19:09:21.931034  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:21.931435  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:22.431185  161014 type.go:168] "Request Body" body=""
	I1009 19:09:22.431270  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:22.431704  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:22.931192  161014 type.go:168] "Request Body" body=""
	I1009 19:09:22.931273  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:22.931689  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:23.431502  161014 type.go:168] "Request Body" body=""
	I1009 19:09:23.431580  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:23.431996  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:23.930860  161014 type.go:168] "Request Body" body=""
	I1009 19:09:23.930955  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:23.931370  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:23.931478  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:24.431207  161014 type.go:168] "Request Body" body=""
	I1009 19:09:24.431286  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:24.431650  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:24.931519  161014 type.go:168] "Request Body" body=""
	I1009 19:09:24.931614  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:24.931998  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:25.431913  161014 type.go:168] "Request Body" body=""
	I1009 19:09:25.432001  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:25.432369  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:25.931194  161014 type.go:168] "Request Body" body=""
	I1009 19:09:25.931275  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:25.931711  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:25.931786  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:26.431609  161014 type.go:168] "Request Body" body=""
	I1009 19:09:26.431690  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:26.432050  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:26.931918  161014 type.go:168] "Request Body" body=""
	I1009 19:09:26.932020  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:26.932417  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:27.431187  161014 type.go:168] "Request Body" body=""
	I1009 19:09:27.431268  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:27.431666  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:27.931530  161014 type.go:168] "Request Body" body=""
	I1009 19:09:27.931614  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:27.931987  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:27.932055  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:28.431844  161014 type.go:168] "Request Body" body=""
	I1009 19:09:28.431933  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:28.432359  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:28.931165  161014 type.go:168] "Request Body" body=""
	I1009 19:09:28.931247  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:28.931680  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:29.431569  161014 type.go:168] "Request Body" body=""
	I1009 19:09:29.431650  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:29.432040  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:29.931942  161014 type.go:168] "Request Body" body=""
	I1009 19:09:29.932027  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:29.932374  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:29.932460  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:30.431194  161014 type.go:168] "Request Body" body=""
	I1009 19:09:30.431282  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:30.431737  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:30.931616  161014 type.go:168] "Request Body" body=""
	I1009 19:09:30.931725  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:30.932121  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:31.430987  161014 type.go:168] "Request Body" body=""
	I1009 19:09:31.431078  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:31.431478  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:31.931232  161014 type.go:168] "Request Body" body=""
	I1009 19:09:31.931315  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:31.931680  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:32.431533  161014 type.go:168] "Request Body" body=""
	I1009 19:09:32.431613  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:32.431992  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:32.432063  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:32.931853  161014 type.go:168] "Request Body" body=""
	I1009 19:09:32.931950  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:32.932297  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:33.431044  161014 type.go:168] "Request Body" body=""
	I1009 19:09:33.431132  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:33.431543  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:33.931355  161014 type.go:168] "Request Body" body=""
	I1009 19:09:33.931458  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:33.931818  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:34.431650  161014 type.go:168] "Request Body" body=""
	I1009 19:09:34.431733  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:34.432148  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:34.432213  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:34.930967  161014 type.go:168] "Request Body" body=""
	I1009 19:09:34.931063  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:34.931473  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:35.431283  161014 type.go:168] "Request Body" body=""
	I1009 19:09:35.431373  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:35.431779  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:35.931628  161014 type.go:168] "Request Body" body=""
	I1009 19:09:35.931736  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:35.932084  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:36.430910  161014 type.go:168] "Request Body" body=""
	I1009 19:09:36.431012  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:36.431444  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:36.931340  161014 type.go:168] "Request Body" body=""
	I1009 19:09:36.931451  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:36.931825  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:36.931893  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:37.431740  161014 type.go:168] "Request Body" body=""
	I1009 19:09:37.431822  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:37.432174  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:37.931117  161014 type.go:168] "Request Body" body=""
	I1009 19:09:37.931218  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:37.931587  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:38.431359  161014 type.go:168] "Request Body" body=""
	I1009 19:09:38.431467  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:38.431870  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:38.931821  161014 type.go:168] "Request Body" body=""
	I1009 19:09:38.931902  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:38.932265  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:38.932358  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:39.431099  161014 type.go:168] "Request Body" body=""
	I1009 19:09:39.431179  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:39.431570  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:39.931428  161014 type.go:168] "Request Body" body=""
	I1009 19:09:39.931517  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:39.931883  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:40.431747  161014 type.go:168] "Request Body" body=""
	I1009 19:09:40.431830  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:40.432201  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:40.931026  161014 type.go:168] "Request Body" body=""
	I1009 19:09:40.931135  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:40.931594  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:41.431370  161014 type.go:168] "Request Body" body=""
	I1009 19:09:41.431476  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:41.431863  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:41.431951  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:41.931795  161014 type.go:168] "Request Body" body=""
	I1009 19:09:41.931873  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:41.932227  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:42.431026  161014 type.go:168] "Request Body" body=""
	I1009 19:09:42.431112  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:42.431474  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:42.931233  161014 type.go:168] "Request Body" body=""
	I1009 19:09:42.931308  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:42.931720  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:43.431625  161014 type.go:168] "Request Body" body=""
	I1009 19:09:43.431708  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:43.432076  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:43.432152  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:43.930876  161014 type.go:168] "Request Body" body=""
	I1009 19:09:43.930965  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:43.931363  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:44.431159  161014 type.go:168] "Request Body" body=""
	I1009 19:09:44.431252  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:44.431660  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:44.931539  161014 type.go:168] "Request Body" body=""
	I1009 19:09:44.931619  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:44.932022  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:45.431848  161014 type.go:168] "Request Body" body=""
	I1009 19:09:45.431928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:45.432294  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:45.432362  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:45.931071  161014 type.go:168] "Request Body" body=""
	I1009 19:09:45.931154  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:45.931550  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:46.431330  161014 type.go:168] "Request Body" body=""
	I1009 19:09:46.431433  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:46.431785  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:46.931618  161014 type.go:168] "Request Body" body=""
	I1009 19:09:46.931717  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:46.932083  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:47.430878  161014 type.go:168] "Request Body" body=""
	I1009 19:09:47.430967  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:47.431308  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:47.931113  161014 type.go:168] "Request Body" body=""
	I1009 19:09:47.931193  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:47.931575  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:47.931645  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:48.431350  161014 type.go:168] "Request Body" body=""
	I1009 19:09:48.431448  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:48.431909  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:48.931846  161014 type.go:168] "Request Body" body=""
	I1009 19:09:48.931928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:48.932292  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:49.431050  161014 type.go:168] "Request Body" body=""
	I1009 19:09:49.431125  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:49.431508  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:49.931265  161014 type.go:168] "Request Body" body=""
	I1009 19:09:49.931345  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:49.931748  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:49.931814  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:50.431652  161014 type.go:168] "Request Body" body=""
	I1009 19:09:50.431747  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:50.432126  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:50.930878  161014 type.go:168] "Request Body" body=""
	I1009 19:09:50.930959  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:50.931336  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:51.431163  161014 type.go:168] "Request Body" body=""
	I1009 19:09:51.431258  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:51.431622  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:51.931358  161014 type.go:168] "Request Body" body=""
	I1009 19:09:51.931464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:51.931852  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:51.931924  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:52.431703  161014 type.go:168] "Request Body" body=""
	I1009 19:09:52.431795  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:52.432179  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:52.930954  161014 type.go:168] "Request Body" body=""
	I1009 19:09:52.931050  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:52.931459  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:53.431224  161014 type.go:168] "Request Body" body=""
	I1009 19:09:53.431365  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:53.431740  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:53.931748  161014 type.go:168] "Request Body" body=""
	I1009 19:09:53.931831  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:53.932191  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:53.932260  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:54.430975  161014 type.go:168] "Request Body" body=""
	I1009 19:09:54.431053  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:54.431476  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:54.931249  161014 type.go:168] "Request Body" body=""
	I1009 19:09:54.931341  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:54.931729  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:55.431607  161014 type.go:168] "Request Body" body=""
	I1009 19:09:55.431691  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:55.432110  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:55.930917  161014 type.go:168] "Request Body" body=""
	I1009 19:09:55.931003  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:55.931362  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:56.431145  161014 type.go:168] "Request Body" body=""
	I1009 19:09:56.431222  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:56.431639  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:56.431710  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:56.931556  161014 type.go:168] "Request Body" body=""
	I1009 19:09:56.931656  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:56.932062  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:57.431903  161014 type.go:168] "Request Body" body=""
	I1009 19:09:57.431989  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:57.432337  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:57.931402  161014 type.go:168] "Request Body" body=""
	I1009 19:09:57.931482  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:57.931843  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:58.431701  161014 type.go:168] "Request Body" body=""
	I1009 19:09:58.431790  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:58.432145  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:58.432218  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:58.931088  161014 type.go:168] "Request Body" body=""
	I1009 19:09:58.931175  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:58.931505  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:59.431298  161014 type.go:168] "Request Body" body=""
	I1009 19:09:59.431395  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:59.431751  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:59.931602  161014 type.go:168] "Request Body" body=""
	I1009 19:09:59.931702  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:59.932051  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:00.430856  161014 type.go:168] "Request Body" body=""
	I1009 19:10:00.430958  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:00.431337  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:00.931121  161014 type.go:168] "Request Body" body=""
	I1009 19:10:00.931220  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:00.931593  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:00.931674  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:01.431423  161014 type.go:168] "Request Body" body=""
	I1009 19:10:01.431509  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:01.431863  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:01.931614  161014 type.go:168] "Request Body" body=""
	I1009 19:10:01.931705  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:01.932079  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:02.430870  161014 type.go:168] "Request Body" body=""
	I1009 19:10:02.430952  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:02.431333  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:02.931135  161014 type.go:168] "Request Body" body=""
	I1009 19:10:02.931235  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:02.931629  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:02.931714  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:03.431520  161014 type.go:168] "Request Body" body=""
	I1009 19:10:03.431673  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:03.432032  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:03.930864  161014 type.go:168] "Request Body" body=""
	I1009 19:10:03.930947  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:03.931344  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:04.431204  161014 type.go:168] "Request Body" body=""
	I1009 19:10:04.431296  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:04.431704  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:04.931600  161014 type.go:168] "Request Body" body=""
	I1009 19:10:04.931678  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:04.932040  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:04.932106  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:05.430899  161014 type.go:168] "Request Body" body=""
	I1009 19:10:05.431003  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:05.431407  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:05.931195  161014 type.go:168] "Request Body" body=""
	I1009 19:10:05.931270  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:05.931635  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:06.431451  161014 type.go:168] "Request Body" body=""
	I1009 19:10:06.431534  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:06.431953  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:06.931837  161014 type.go:168] "Request Body" body=""
	I1009 19:10:06.931927  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:06.932279  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:06.932345  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:07.431076  161014 type.go:168] "Request Body" body=""
	I1009 19:10:07.431164  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:07.431506  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:07.931394  161014 type.go:168] "Request Body" body=""
	I1009 19:10:07.931475  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:07.931835  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:08.431660  161014 type.go:168] "Request Body" body=""
	I1009 19:10:08.431741  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:08.432102  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:08.930920  161014 type.go:168] "Request Body" body=""
	I1009 19:10:08.930998  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:08.931426  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:09.431179  161014 type.go:168] "Request Body" body=""
	I1009 19:10:09.431260  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:09.431640  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:09.431713  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:09.931551  161014 type.go:168] "Request Body" body=""
	I1009 19:10:09.931636  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:09.932085  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:10.430911  161014 type.go:168] "Request Body" body=""
	I1009 19:10:10.431004  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:10.431408  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:10.931180  161014 type.go:168] "Request Body" body=""
	I1009 19:10:10.931260  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:10.931651  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:11.431533  161014 type.go:168] "Request Body" body=""
	I1009 19:10:11.431610  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:11.432017  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:11.432093  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:11.930846  161014 type.go:168] "Request Body" body=""
	I1009 19:10:11.930928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:11.931300  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:12.431099  161014 type.go:168] "Request Body" body=""
	I1009 19:10:12.431188  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:12.431615  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:12.931577  161014 type.go:168] "Request Body" body=""
	I1009 19:10:12.931661  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:12.932029  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:13.431910  161014 type.go:168] "Request Body" body=""
	I1009 19:10:13.431995  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:13.432350  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:13.432438  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:13.931217  161014 type.go:168] "Request Body" body=""
	I1009 19:10:13.931302  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:13.931678  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:14.431548  161014 type.go:168] "Request Body" body=""
	I1009 19:10:14.431638  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:14.432110  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:14.930876  161014 type.go:168] "Request Body" body=""
	I1009 19:10:14.930963  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:14.931343  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:15.431156  161014 type.go:168] "Request Body" body=""
	I1009 19:10:15.431244  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:15.431618  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:15.931358  161014 type.go:168] "Request Body" body=""
	I1009 19:10:15.931451  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:15.931817  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:15.931883  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:16.431696  161014 type.go:168] "Request Body" body=""
	I1009 19:10:16.431794  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:16.432158  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:16.930930  161014 type.go:168] "Request Body" body=""
	I1009 19:10:16.931010  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:16.931370  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:17.431200  161014 type.go:168] "Request Body" body=""
	I1009 19:10:17.431290  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:17.431663  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:17.931525  161014 type.go:168] "Request Body" body=""
	I1009 19:10:17.931613  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:17.932012  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:17.932077  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:18.431980  161014 type.go:168] "Request Body" body=""
	I1009 19:10:18.432065  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:18.432498  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:18.931327  161014 type.go:168] "Request Body" body=""
	I1009 19:10:18.931435  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:18.931798  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:19.431649  161014 type.go:168] "Request Body" body=""
	I1009 19:10:19.431736  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:19.432133  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:19.930941  161014 type.go:168] "Request Body" body=""
	I1009 19:10:19.931034  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:19.931426  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:20.431191  161014 type.go:168] "Request Body" body=""
	I1009 19:10:20.431277  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:20.431702  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:20.431786  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:20.931649  161014 type.go:168] "Request Body" body=""
	I1009 19:10:20.931743  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:20.932145  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:21.430998  161014 type.go:168] "Request Body" body=""
	I1009 19:10:21.431093  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:21.431500  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:21.931294  161014 type.go:168] "Request Body" body=""
	I1009 19:10:21.931404  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:21.931769  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:22.431592  161014 type.go:168] "Request Body" body=""
	I1009 19:10:22.431689  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:22.432061  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:22.432138  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:22.930890  161014 type.go:168] "Request Body" body=""
	I1009 19:10:22.930981  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:22.931355  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:23.431123  161014 type.go:168] "Request Body" body=""
	I1009 19:10:23.431202  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:23.431562  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:23.931393  161014 type.go:168] "Request Body" body=""
	I1009 19:10:23.931475  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:23.931849  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:24.431681  161014 type.go:168] "Request Body" body=""
	I1009 19:10:24.431765  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:24.432120  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:24.432200  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:24.930948  161014 type.go:168] "Request Body" body=""
	I1009 19:10:24.931038  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:24.931411  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:25.431172  161014 type.go:168] "Request Body" body=""
	I1009 19:10:25.431263  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:25.431645  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:25.931517  161014 type.go:168] "Request Body" body=""
	I1009 19:10:25.931604  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:25.931950  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:26.431795  161014 type.go:168] "Request Body" body=""
	I1009 19:10:26.431877  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:26.432259  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:26.432327  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:26.931108  161014 type.go:168] "Request Body" body=""
	I1009 19:10:26.931192  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:26.931561  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:27.431372  161014 type.go:168] "Request Body" body=""
	I1009 19:10:27.431478  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:27.431852  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:27.931767  161014 type.go:168] "Request Body" body=""
	I1009 19:10:27.931844  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:27.932243  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:28.431036  161014 type.go:168] "Request Body" body=""
	I1009 19:10:28.431114  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:28.431500  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:28.931317  161014 type.go:168] "Request Body" body=""
	I1009 19:10:28.931415  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:28.931802  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:28.931870  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:29.431682  161014 type.go:168] "Request Body" body=""
	I1009 19:10:29.431764  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:29.432158  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:29.930948  161014 type.go:168] "Request Body" body=""
	I1009 19:10:29.931029  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:29.931432  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:30.431237  161014 type.go:168] "Request Body" body=""
	I1009 19:10:30.431318  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:30.431700  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:30.931592  161014 type.go:168] "Request Body" body=""
	I1009 19:10:30.931686  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:30.932066  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:30.932138  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:31.430865  161014 type.go:168] "Request Body" body=""
	I1009 19:10:31.430944  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:31.431326  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:31.931100  161014 type.go:168] "Request Body" body=""
	I1009 19:10:31.931183  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:31.931557  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:32.431408  161014 type.go:168] "Request Body" body=""
	I1009 19:10:32.431492  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:32.431860  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:32.931727  161014 type.go:168] "Request Body" body=""
	I1009 19:10:32.931827  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:32.932201  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:32.932275  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:33.431035  161014 type.go:168] "Request Body" body=""
	I1009 19:10:33.431127  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:33.431509  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:33.931347  161014 type.go:168] "Request Body" body=""
	I1009 19:10:33.931452  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:33.931805  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:34.431659  161014 type.go:168] "Request Body" body=""
	I1009 19:10:34.431767  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:34.432157  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:34.930935  161014 type.go:168] "Request Body" body=""
	I1009 19:10:34.931032  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:34.931422  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:35.431188  161014 type.go:168] "Request Body" body=""
	I1009 19:10:35.431266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:35.431638  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:35.431700  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:35.931496  161014 type.go:168] "Request Body" body=""
	I1009 19:10:35.931583  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:35.931982  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:36.431843  161014 type.go:168] "Request Body" body=""
	I1009 19:10:36.431930  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:36.432287  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:36.931012  161014 type.go:168] "Request Body" body=""
	I1009 19:10:36.931101  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:36.931479  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:37.431254  161014 type.go:168] "Request Body" body=""
	I1009 19:10:37.431336  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:37.431708  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:37.431785  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:37.931498  161014 type.go:168] "Request Body" body=""
	I1009 19:10:37.931578  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:37.931952  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:38.431802  161014 type.go:168] "Request Body" body=""
	I1009 19:10:38.431878  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:38.432242  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:38.931094  161014 type.go:168] "Request Body" body=""
	I1009 19:10:38.931171  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:38.931535  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:39.431342  161014 type.go:168] "Request Body" body=""
	I1009 19:10:39.431467  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:39.431828  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:39.431894  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:39.931678  161014 type.go:168] "Request Body" body=""
	I1009 19:10:39.931769  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:39.932114  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:40.430894  161014 type.go:168] "Request Body" body=""
	I1009 19:10:40.431002  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:40.431338  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:40.931086  161014 type.go:168] "Request Body" body=""
	I1009 19:10:40.931169  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:40.931549  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:41.431354  161014 type.go:168] "Request Body" body=""
	I1009 19:10:41.431484  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:41.431936  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:41.432009  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:41.931856  161014 type.go:168] "Request Body" body=""
	I1009 19:10:41.931944  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:41.932342  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:42.431239  161014 type.go:168] "Request Body" body=""
	I1009 19:10:42.431343  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:42.431776  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:42.931642  161014 type.go:168] "Request Body" body=""
	I1009 19:10:42.931724  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:42.932139  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:43.430955  161014 type.go:168] "Request Body" body=""
	I1009 19:10:43.431055  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:43.431482  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:43.931286  161014 type.go:168] "Request Body" body=""
	I1009 19:10:43.931364  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:43.931761  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:43.931841  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:44.431651  161014 type.go:168] "Request Body" body=""
	I1009 19:10:44.431739  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:44.432136  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:44.930918  161014 type.go:168] "Request Body" body=""
	I1009 19:10:44.930997  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:44.931368  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:45.431210  161014 type.go:168] "Request Body" body=""
	I1009 19:10:45.431301  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:45.431803  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:45.931785  161014 type.go:168] "Request Body" body=""
	I1009 19:10:45.931879  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:45.932234  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:45.932298  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:46.431044  161014 type.go:168] "Request Body" body=""
	I1009 19:10:46.431130  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:46.431509  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:46.931298  161014 type.go:168] "Request Body" body=""
	I1009 19:10:46.931409  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:46.931768  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:47.431684  161014 type.go:168] "Request Body" body=""
	I1009 19:10:47.431772  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:47.432192  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:47.930892  161014 type.go:168] "Request Body" body=""
	I1009 19:10:47.931082  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:47.931491  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:48.431254  161014 type.go:168] "Request Body" body=""
	I1009 19:10:48.431334  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:48.431754  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:48.431817  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:48.931519  161014 type.go:168] "Request Body" body=""
	I1009 19:10:48.931605  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:48.931963  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:49.431903  161014 type.go:168] "Request Body" body=""
	I1009 19:10:49.431995  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:49.432442  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:49.931216  161014 type.go:168] "Request Body" body=""
	I1009 19:10:49.931299  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:49.931685  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:50.431513  161014 type.go:168] "Request Body" body=""
	I1009 19:10:50.431600  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:50.432015  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:50.432094  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:50.931903  161014 type.go:168] "Request Body" body=""
	I1009 19:10:50.931985  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:50.932356  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:51.431151  161014 type.go:168] "Request Body" body=""
	I1009 19:10:51.431235  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:51.431691  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:51.931607  161014 type.go:168] "Request Body" body=""
	I1009 19:10:51.931704  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:51.932066  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:52.430855  161014 type.go:168] "Request Body" body=""
	I1009 19:10:52.430936  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:52.431352  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:52.931144  161014 type.go:168] "Request Body" body=""
	I1009 19:10:52.931236  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:52.931629  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:52.931694  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:53.431504  161014 type.go:168] "Request Body" body=""
	I1009 19:10:53.431592  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:53.431978  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:53.930879  161014 type.go:168] "Request Body" body=""
	I1009 19:10:53.930990  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:53.931420  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:54.431176  161014 type.go:168] "Request Body" body=""
	I1009 19:10:54.431256  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:54.431696  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:54.931517  161014 type.go:168] "Request Body" body=""
	I1009 19:10:54.931600  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:54.932006  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:54.932070  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:55.431919  161014 type.go:168] "Request Body" body=""
	I1009 19:10:55.432013  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:55.432499  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:55.931252  161014 type.go:168] "Request Body" body=""
	I1009 19:10:55.931340  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:55.931770  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:56.431601  161014 type.go:168] "Request Body" body=""
	I1009 19:10:56.431705  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:56.432063  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:56.930857  161014 type.go:168] "Request Body" body=""
	I1009 19:10:56.930944  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:56.931308  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:57.431063  161014 type.go:168] "Request Body" body=""
	I1009 19:10:57.431152  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:57.431557  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:57.431627  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:57.931435  161014 type.go:168] "Request Body" body=""
	I1009 19:10:57.931520  161014 node_ready.go:38] duration metric: took 6m0.000788191s for node "functional-158523" to be "Ready" ...
	I1009 19:10:57.934316  161014 out.go:203] 
	W1009 19:10:57.935818  161014 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:10:57.935834  161014 out.go:285] * 
	W1009 19:10:57.937485  161014 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:10:57.938875  161014 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:11:07 functional-158523 crio[2962]: time="2025-10-09T19:11:07.618786437Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=c5293c42-0ea8-48f5-8e7d-1bbf5077e421 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:07 functional-158523 crio[2962]: time="2025-10-09T19:11:07.619735412Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=813d15c0-3d2a-46c4-92a5-ab810b5cd161 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:07 functional-158523 crio[2962]: time="2025-10-09T19:11:07.620759724Z" level=info msg="Creating container: kube-system/etcd-functional-158523/etcd" id=0244892b-300d-4d0e-9b0e-abdde6a301d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:11:07 functional-158523 crio[2962]: time="2025-10-09T19:11:07.621029883Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:11:07 functional-158523 crio[2962]: time="2025-10-09T19:11:07.625752239Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:11:07 functional-158523 crio[2962]: time="2025-10-09T19:11:07.626195132Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:11:07 functional-158523 crio[2962]: time="2025-10-09T19:11:07.641477516Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=0244892b-300d-4d0e-9b0e-abdde6a301d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:11:07 functional-158523 crio[2962]: time="2025-10-09T19:11:07.643024547Z" level=info msg="createCtr: deleting container ID e30e29260a7133c53026632423b7e99930f2f63aa126f52c8a57d9c8e31cdea5 from idIndex" id=0244892b-300d-4d0e-9b0e-abdde6a301d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:11:07 functional-158523 crio[2962]: time="2025-10-09T19:11:07.643074263Z" level=info msg="createCtr: removing container e30e29260a7133c53026632423b7e99930f2f63aa126f52c8a57d9c8e31cdea5" id=0244892b-300d-4d0e-9b0e-abdde6a301d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:11:07 functional-158523 crio[2962]: time="2025-10-09T19:11:07.643110277Z" level=info msg="createCtr: deleting container e30e29260a7133c53026632423b7e99930f2f63aa126f52c8a57d9c8e31cdea5 from storage" id=0244892b-300d-4d0e-9b0e-abdde6a301d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:11:07 functional-158523 crio[2962]: time="2025-10-09T19:11:07.64556168Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-158523_kube-system_8f4f9df5924bbaa4e1ec7f60e6576647_0" id=0244892b-300d-4d0e-9b0e-abdde6a301d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:11:08 functional-158523 crio[2962]: time="2025-10-09T19:11:08.314246588Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=104975bf-2b48-4e4d-a86e-25ef03ca74ca name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:08 functional-158523 crio[2962]: time="2025-10-09T19:11:08.615737223Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=45beb944-3390-4ed5-af26-767e709564ee name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:08 functional-158523 crio[2962]: time="2025-10-09T19:11:08.615900524Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=45beb944-3390-4ed5-af26-767e709564ee name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:08 functional-158523 crio[2962]: time="2025-10-09T19:11:08.615958367Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=45beb944-3390-4ed5-af26-767e709564ee name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:09 functional-158523 crio[2962]: time="2025-10-09T19:11:09.137430266Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=af9c3420-8b23-4257-8355-007a5da08d11 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:09 functional-158523 crio[2962]: time="2025-10-09T19:11:09.137868818Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=af9c3420-8b23-4257-8355-007a5da08d11 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:09 functional-158523 crio[2962]: time="2025-10-09T19:11:09.137933141Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=af9c3420-8b23-4257-8355-007a5da08d11 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:09 functional-158523 crio[2962]: time="2025-10-09T19:11:09.179160176Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=32a3a6df-6778-4740-bd0d-d8b7567cba27 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:09 functional-158523 crio[2962]: time="2025-10-09T19:11:09.179317093Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=32a3a6df-6778-4740-bd0d-d8b7567cba27 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:09 functional-158523 crio[2962]: time="2025-10-09T19:11:09.179349349Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=32a3a6df-6778-4740-bd0d-d8b7567cba27 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:09 functional-158523 crio[2962]: time="2025-10-09T19:11:09.20621085Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=5a2d42a7-ded8-4fca-8f38-b709675531e3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:09 functional-158523 crio[2962]: time="2025-10-09T19:11:09.206368283Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=5a2d42a7-ded8-4fca-8f38-b709675531e3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:09 functional-158523 crio[2962]: time="2025-10-09T19:11:09.206431755Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=5a2d42a7-ded8-4fca-8f38-b709675531e3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:09 functional-158523 crio[2962]: time="2025-10-09T19:11:09.678541017Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=1415653d-2625-44ae-837a-84f84cc9d152 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:11:11.105169    5317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:11:11.106894    5317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:11:11.107291    5317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:11:11.108997    5317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:11:11.109429    5317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:11:11 up 53 min,  0 user,  load average: 0.30, 0.19, 9.27
	Linux functional-158523 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:11:03 functional-158523 kubelet[1810]:  > podSandboxID="8e9b8d6f8f5607eade31cf47137dabb7c979b7a05be5d892419ed28c4be5e916"
	Oct 09 19:11:03 functional-158523 kubelet[1810]: E1009 19:11:03.644418    1810 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:11:03 functional-158523 kubelet[1810]:         container kube-scheduler start failed in pod kube-scheduler-functional-158523_kube-system(589c70f36d169281ef056387fc3a74a2): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:11:03 functional-158523 kubelet[1810]:  > logger="UnhandledError"
	Oct 09 19:11:03 functional-158523 kubelet[1810]: E1009 19:11:03.644469    1810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-158523" podUID="589c70f36d169281ef056387fc3a74a2"
	Oct 09 19:11:04 functional-158523 kubelet[1810]: E1009 19:11:04.618981    1810 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:11:04 functional-158523 kubelet[1810]: E1009 19:11:04.644152    1810 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:11:04 functional-158523 kubelet[1810]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:11:04 functional-158523 kubelet[1810]:  > podSandboxID="1577439806fcd9d603693a21a1b77ea4da9104d29c8aecd0dc0681165a9e1de2"
	Oct 09 19:11:04 functional-158523 kubelet[1810]: E1009 19:11:04.644281    1810 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:11:04 functional-158523 kubelet[1810]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-158523_kube-system(08a2248bbc397b9ed927890e2073120b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:11:04 functional-158523 kubelet[1810]:  > logger="UnhandledError"
	Oct 09 19:11:04 functional-158523 kubelet[1810]: E1009 19:11:04.644333    1810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-158523" podUID="08a2248bbc397b9ed927890e2073120b"
	Oct 09 19:11:06 functional-158523 kubelet[1810]: E1009 19:11:06.310265    1810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-158523?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 19:11:06 functional-158523 kubelet[1810]: I1009 19:11:06.519623    1810 kubelet_node_status.go:75] "Attempting to register node" node="functional-158523"
	Oct 09 19:11:06 functional-158523 kubelet[1810]: E1009 19:11:06.520042    1810 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-158523"
	Oct 09 19:11:06 functional-158523 kubelet[1810]: E1009 19:11:06.593842    1810 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-158523.186ce7d3e1d25377\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-158523.186ce7d3e1d25377  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-158523,UID:functional-158523,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-158523 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-158523,},FirstTimestamp:2025-10-09 19:00:51.607794551 +0000 UTC m=+0.591054211,LastTimestamp:2025-10-09 19:00:51.609818572 +0000 UTC m=+0.593078239,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-158523,}"
	Oct 09 19:11:07 functional-158523 kubelet[1810]: E1009 19:11:07.618261    1810 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:11:07 functional-158523 kubelet[1810]: E1009 19:11:07.645885    1810 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:11:07 functional-158523 kubelet[1810]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:11:07 functional-158523 kubelet[1810]:  > podSandboxID="c5f59cf39316c74dd65d2925d309cbd6e6fdc48c022b61803b3c6d8d973e588c"
	Oct 09 19:11:07 functional-158523 kubelet[1810]: E1009 19:11:07.646021    1810 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:11:07 functional-158523 kubelet[1810]:         container etcd start failed in pod etcd-functional-158523_kube-system(8f4f9df5924bbaa4e1ec7f60e6576647): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:11:07 functional-158523 kubelet[1810]:  > logger="UnhandledError"
	Oct 09 19:11:07 functional-158523 kubelet[1810]: E1009 19:11:07.646063    1810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-158523" podUID="8f4f9df5924bbaa4e1ec7f60e6576647"
	

                                                
                                                
-- /stdout --
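Editor's note on the wait loop in the log above: minikube's node_ready check re-issues GET /api/v1/nodes/functional-158523 roughly every 500ms until the node reports a Ready condition or the 6m budget expires; here every request fails with "connection refused" because the apiserver never comes up (the CRI-O and kubelet entries show etcd, kube-scheduler and kube-controller-manager all failing to start with "cannot open sd-bus: No such file or directory"). A minimal client-go sketch of the same Ready poll, assuming a reachable kubeconfig for the profile (the path below is illustrative, not taken from this report):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; minikube writes the real one for the profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // same budget as the failed wait above
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-158523", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		// On "connection refused" (as in the log) err is non-nil and the loop simply retries.
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node Ready condition")
}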
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523: exit status 2 (319.669813ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-158523" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (2.15s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (2.17s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-158523 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-158523 get pods: exit status 1 (101.331465ms)

                                                
                                                
** stderr ** 
	E1009 19:11:12.036435  166995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:11:12.036782  166995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:11:12.038274  166995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:11:12.038579  166995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:11:12.039954  166995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-158523 get pods": exit status 1
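Every kubectl attempt above fails with "connection refused" on 192.168.49.2:8441, which suggests nothing is listening on the apiserver port rather than a kubectl misconfiguration. A quick host-side probe, sketched here only as a diagnostic (the /healthz path is the standard apiserver health endpoint, and the published host port 32781 is taken from the docker inspect output below), would be:

	curl -sk https://192.168.49.2:8441/healthz
	curl -sk https://127.0.0.1:32781/healthz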
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-158523
helpers_test.go:243: (dbg) docker inspect functional-158523:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	        "Created": "2025-10-09T18:56:39.519997973Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 156103,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:56:39.555178961Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hosts",
	        "LogPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4-json.log",
	        "Name": "/functional-158523",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-158523:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-158523",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	                "LowerDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-158523",
	                "Source": "/var/lib/docker/volumes/functional-158523/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-158523",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-158523",
	                "name.minikube.sigs.k8s.io": "functional-158523",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "50f61b8c99f766763e9e30d37584e537ddf10f74215a382a4221cbe7f7c2e821",
	            "SandboxKey": "/var/run/docker/netns/50f61b8c99f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-158523": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:a1:34:24:78:d1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6347eb7b6b2842e266f51d3873c8aa169a4287ea52510c3d62dc4ed41012963c",
	                    "EndpointID": "eb1ada23cbc058d52037dd94574627dd1b0fedf455f291e656f050bf06881952",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-158523",
	                        "dea3735c567c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
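The full docker inspect dump above can be narrowed to the fields the post-mortem actually needs using the same Go-template style the provisioning log uses further down; the 8441/tcp variant below is simply the 22/tcp template from that log with the port swapped:

	docker container inspect -f '{{.State.Status}}' functional-158523
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-158523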
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523: exit status 2 (301.419807ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-158523 logs -n 25: (1.012258038s)
helpers_test.go:260: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-656427 --log_dir /tmp/nospam-656427 pause                                                              │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ unpause │ nospam-656427 --log_dir /tmp/nospam-656427 unpause                                                            │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ unpause │ nospam-656427 --log_dir /tmp/nospam-656427 unpause                                                            │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ unpause │ nospam-656427 --log_dir /tmp/nospam-656427 unpause                                                            │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ nospam-656427 --log_dir /tmp/nospam-656427 stop                                                               │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ nospam-656427 --log_dir /tmp/nospam-656427 stop                                                               │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ nospam-656427 --log_dir /tmp/nospam-656427 stop                                                               │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ delete  │ -p nospam-656427                                                                                              │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ start   │ -p functional-158523 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ start   │ -p functional-158523 --alsologtostderr -v=8                                                                   │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:04 UTC │                     │
	│ cache   │ functional-158523 cache add registry.k8s.io/pause:3.1                                                         │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ functional-158523 cache add registry.k8s.io/pause:3.3                                                         │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ functional-158523 cache add registry.k8s.io/pause:latest                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ functional-158523 cache add minikube-local-cache-test:functional-158523                                       │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ functional-158523 cache delete minikube-local-cache-test:functional-158523                                    │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-158523 ssh sudo crictl images                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-158523 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-158523 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │                     │
	│ cache   │ functional-158523 cache reload                                                                                │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-158523 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ kubectl │ functional-158523 kubectl -- --context functional-158523 get pods                                             │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:04:53
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:04:53.859600  161014 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:04:53.859894  161014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:53.859904  161014 out.go:374] Setting ErrFile to fd 2...
	I1009 19:04:53.859909  161014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:53.860103  161014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:04:53.860622  161014 out.go:368] Setting JSON to false
	I1009 19:04:53.861569  161014 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2843,"bootTime":1760033851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:04:53.861680  161014 start.go:143] virtualization: kvm guest
	I1009 19:04:53.864538  161014 out.go:179] * [functional-158523] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:04:53.866020  161014 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:04:53.866041  161014 notify.go:221] Checking for updates...
	I1009 19:04:53.868520  161014 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:04:53.869799  161014 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:53.871001  161014 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:04:53.872350  161014 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:04:53.873695  161014 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:04:53.875515  161014 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:53.875628  161014 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:04:53.899122  161014 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:04:53.899239  161014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:04:53.961702  161014 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:04:53.950772825 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:04:53.961810  161014 docker.go:319] overlay module found
	I1009 19:04:53.963901  161014 out.go:179] * Using the docker driver based on existing profile
	I1009 19:04:53.965359  161014 start.go:309] selected driver: docker
	I1009 19:04:53.965397  161014 start.go:930] validating driver "docker" against &{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:04:53.965505  161014 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:04:53.965601  161014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:04:54.024534  161014 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:04:54.014787007 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:04:54.025138  161014 cni.go:84] Creating CNI manager for ""
	I1009 19:04:54.025189  161014 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:04:54.025246  161014 start.go:353] cluster config:
	{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:04:54.027519  161014 out.go:179] * Starting "functional-158523" primary control-plane node in "functional-158523" cluster
	I1009 19:04:54.028967  161014 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:04:54.030473  161014 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:04:54.031821  161014 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:04:54.031876  161014 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:04:54.031885  161014 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:04:54.031986  161014 cache.go:58] Caching tarball of preloaded images
	I1009 19:04:54.032085  161014 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:04:54.032098  161014 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:04:54.032213  161014 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/config.json ...
	I1009 19:04:54.053026  161014 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:04:54.053045  161014 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:04:54.053063  161014 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:04:54.053096  161014 start.go:361] acquireMachinesLock for functional-158523: {Name:mk995713bbd40419f859c4a8640c8ada0479020c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:04:54.053186  161014 start.go:365] duration metric: took 46.429µs to acquireMachinesLock for "functional-158523"
	I1009 19:04:54.053209  161014 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:04:54.053220  161014 fix.go:55] fixHost starting: 
	I1009 19:04:54.053511  161014 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:04:54.070674  161014 fix.go:113] recreateIfNeeded on functional-158523: state=Running err=<nil>
	W1009 19:04:54.070714  161014 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:04:54.072611  161014 out.go:252] * Updating the running docker "functional-158523" container ...
	I1009 19:04:54.072644  161014 machine.go:93] provisionDockerMachine start ...
	I1009 19:04:54.072732  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:54.089158  161014 main.go:141] libmachine: Using SSH client type: native
	I1009 19:04:54.089398  161014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:04:54.089417  161014 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:04:54.234516  161014 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-158523
	
	I1009 19:04:54.234543  161014 ubuntu.go:182] provisioning hostname "functional-158523"
	I1009 19:04:54.234606  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:54.252690  161014 main.go:141] libmachine: Using SSH client type: native
	I1009 19:04:54.252942  161014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:04:54.252960  161014 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-158523 && echo "functional-158523" | sudo tee /etc/hostname
	I1009 19:04:54.409130  161014 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-158523
	
	I1009 19:04:54.409240  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:54.428592  161014 main.go:141] libmachine: Using SSH client type: native
	I1009 19:04:54.428819  161014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:04:54.428839  161014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-158523' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-158523/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-158523' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:04:54.575221  161014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:04:54.575248  161014 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:04:54.575298  161014 ubuntu.go:190] setting up certificates
	I1009 19:04:54.575313  161014 provision.go:84] configureAuth start
	I1009 19:04:54.575366  161014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-158523
	I1009 19:04:54.593157  161014 provision.go:143] copyHostCerts
	I1009 19:04:54.593200  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:04:54.593229  161014 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:04:54.593244  161014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:04:54.593315  161014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:04:54.593491  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:04:54.593517  161014 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:04:54.593524  161014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:04:54.593557  161014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:04:54.593615  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:04:54.593632  161014 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:04:54.593638  161014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:04:54.593693  161014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:04:54.593752  161014 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.functional-158523 san=[127.0.0.1 192.168.49.2 functional-158523 localhost minikube]
	I1009 19:04:54.998231  161014 provision.go:177] copyRemoteCerts
	I1009 19:04:54.998297  161014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:04:54.998335  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.016505  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.120020  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:04:55.120077  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 19:04:55.138116  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:04:55.138187  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:04:55.157031  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:04:55.157100  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:04:55.176045  161014 provision.go:87] duration metric: took 600.715143ms to configureAuth
	I1009 19:04:55.176080  161014 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:04:55.176245  161014 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:55.176357  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.194450  161014 main.go:141] libmachine: Using SSH client type: native
	I1009 19:04:55.194679  161014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:04:55.194701  161014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:04:55.467764  161014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:04:55.467789  161014 machine.go:96] duration metric: took 1.395134259s to provisionDockerMachine
	I1009 19:04:55.467804  161014 start.go:294] postStartSetup for "functional-158523" (driver="docker")
	I1009 19:04:55.467821  161014 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:04:55.467882  161014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:04:55.467922  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.486353  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.591117  161014 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:04:55.594855  161014 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1009 19:04:55.594886  161014 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1009 19:04:55.594893  161014 command_runner.go:130] > VERSION_ID="12"
	I1009 19:04:55.594900  161014 command_runner.go:130] > VERSION="12 (bookworm)"
	I1009 19:04:55.594907  161014 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1009 19:04:55.594911  161014 command_runner.go:130] > ID=debian
	I1009 19:04:55.594915  161014 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1009 19:04:55.594920  161014 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1009 19:04:55.594926  161014 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1009 19:04:55.594992  161014 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:04:55.595011  161014 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:04:55.595023  161014 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:04:55.595090  161014 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:04:55.595204  161014 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:04:55.595227  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:04:55.595320  161014 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts -> hosts in /etc/test/nested/copy/141519
	I1009 19:04:55.595330  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts -> /etc/test/nested/copy/141519/hosts
	I1009 19:04:55.595388  161014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/141519
	I1009 19:04:55.603244  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:04:55.621701  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts --> /etc/test/nested/copy/141519/hosts (40 bytes)
	I1009 19:04:55.640532  161014 start.go:297] duration metric: took 172.708538ms for postStartSetup
	I1009 19:04:55.640625  161014 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:04:55.640672  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.658424  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.758913  161014 command_runner.go:130] > 38%
	I1009 19:04:55.759004  161014 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:04:55.763762  161014 command_runner.go:130] > 182G
	I1009 19:04:55.763807  161014 fix.go:57] duration metric: took 1.710584464s for fixHost
	I1009 19:04:55.763821  161014 start.go:84] releasing machines lock for "functional-158523", held for 1.710622732s
	I1009 19:04:55.763882  161014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-158523
	I1009 19:04:55.781557  161014 ssh_runner.go:195] Run: cat /version.json
	I1009 19:04:55.781620  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.781568  161014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:04:55.781740  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:55.800026  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.800289  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:55.899840  161014 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759745255-21703", "minikube_version": "v1.37.0", "commit": "a51fe4b7ffc88febd8814e8831f38772e976d097"}
	I1009 19:04:55.953125  161014 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1009 19:04:55.955421  161014 ssh_runner.go:195] Run: systemctl --version
	I1009 19:04:55.962169  161014 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1009 19:04:55.962207  161014 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1009 19:04:55.962422  161014 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:04:56.001789  161014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 19:04:56.006364  161014 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1009 19:04:56.006710  161014 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:04:56.006818  161014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:04:56.015207  161014 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:04:56.015234  161014 start.go:496] detecting cgroup driver to use...
	I1009 19:04:56.015270  161014 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:04:56.015326  161014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:04:56.030444  161014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:04:56.043355  161014 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:04:56.043439  161014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:04:56.058903  161014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:04:56.072794  161014 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:04:56.155598  161014 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:04:56.243484  161014 docker.go:234] disabling docker service ...
	I1009 19:04:56.243560  161014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:04:56.258472  161014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:04:56.271168  161014 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:04:56.357916  161014 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:04:56.444044  161014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:04:56.457436  161014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:04:56.471973  161014 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1009 19:04:56.472020  161014 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:04:56.472074  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.481231  161014 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:04:56.481304  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.490735  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.499743  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.508857  161014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:04:56.517176  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.525878  161014 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.534146  161014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.542852  161014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:04:56.549944  161014 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1009 19:04:56.550015  161014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:04:56.557444  161014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:04:56.640120  161014 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:04:56.755858  161014 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:04:56.755937  161014 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:04:56.760115  161014 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1009 19:04:56.760139  161014 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1009 19:04:56.760145  161014 command_runner.go:130] > Device: 0,59	Inode: 3908        Links: 1
	I1009 19:04:56.760152  161014 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 19:04:56.760157  161014 command_runner.go:130] > Access: 2025-10-09 19:04:56.737667059 +0000
	I1009 19:04:56.760162  161014 command_runner.go:130] > Modify: 2025-10-09 19:04:56.737667059 +0000
	I1009 19:04:56.760167  161014 command_runner.go:130] > Change: 2025-10-09 19:04:56.737667059 +0000
	I1009 19:04:56.760171  161014 command_runner.go:130] >  Birth: 2025-10-09 19:04:56.737667059 +0000
	I1009 19:04:56.760191  161014 start.go:564] Will wait 60s for crictl version
	I1009 19:04:56.760238  161014 ssh_runner.go:195] Run: which crictl
	I1009 19:04:56.764068  161014 command_runner.go:130] > /usr/local/bin/crictl
	I1009 19:04:56.764145  161014 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:04:56.790045  161014 command_runner.go:130] > Version:  0.1.0
	I1009 19:04:56.790068  161014 command_runner.go:130] > RuntimeName:  cri-o
	I1009 19:04:56.790072  161014 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1009 19:04:56.790077  161014 command_runner.go:130] > RuntimeApiVersion:  v1
	I1009 19:04:56.790095  161014 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
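The version probe above goes through the socket configured earlier (unix:///var/run/crio/crio.sock). The same check can be reproduced explicitly by passing that endpoint to crictl, which should report the fields shown in the log:

	sudo /usr/local/bin/crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	# Version: 0.1.0, RuntimeName: cri-o, RuntimeVersion: 1.34.1, RuntimeApiVersion: v1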
	I1009 19:04:56.790164  161014 ssh_runner.go:195] Run: crio --version
	I1009 19:04:56.817435  161014 command_runner.go:130] > crio version 1.34.1
	I1009 19:04:56.817460  161014 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1009 19:04:56.817466  161014 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1009 19:04:56.817470  161014 command_runner.go:130] >    GitTreeState:   dirty
	I1009 19:04:56.817475  161014 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1009 19:04:56.817480  161014 command_runner.go:130] >    GoVersion:      go1.24.6
	I1009 19:04:56.817483  161014 command_runner.go:130] >    Compiler:       gc
	I1009 19:04:56.817488  161014 command_runner.go:130] >    Platform:       linux/amd64
	I1009 19:04:56.817492  161014 command_runner.go:130] >    Linkmode:       static
	I1009 19:04:56.817496  161014 command_runner.go:130] >    BuildTags:
	I1009 19:04:56.817499  161014 command_runner.go:130] >      static
	I1009 19:04:56.817503  161014 command_runner.go:130] >      netgo
	I1009 19:04:56.817506  161014 command_runner.go:130] >      osusergo
	I1009 19:04:56.817510  161014 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1009 19:04:56.817514  161014 command_runner.go:130] >      seccomp
	I1009 19:04:56.817518  161014 command_runner.go:130] >      apparmor
	I1009 19:04:56.817521  161014 command_runner.go:130] >      selinux
	I1009 19:04:56.817525  161014 command_runner.go:130] >    LDFlags:          unknown
	I1009 19:04:56.817531  161014 command_runner.go:130] >    SeccompEnabled:   true
	I1009 19:04:56.817535  161014 command_runner.go:130] >    AppArmorEnabled:  false
	I1009 19:04:56.819047  161014 ssh_runner.go:195] Run: crio --version
	I1009 19:04:56.846110  161014 command_runner.go:130] > crio version 1.34.1
	I1009 19:04:56.846137  161014 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1009 19:04:56.846145  161014 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1009 19:04:56.846154  161014 command_runner.go:130] >    GitTreeState:   dirty
	I1009 19:04:56.846160  161014 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1009 19:04:56.846166  161014 command_runner.go:130] >    GoVersion:      go1.24.6
	I1009 19:04:56.846172  161014 command_runner.go:130] >    Compiler:       gc
	I1009 19:04:56.846179  161014 command_runner.go:130] >    Platform:       linux/amd64
	I1009 19:04:56.846185  161014 command_runner.go:130] >    Linkmode:       static
	I1009 19:04:56.846193  161014 command_runner.go:130] >    BuildTags:
	I1009 19:04:56.846202  161014 command_runner.go:130] >      static
	I1009 19:04:56.846209  161014 command_runner.go:130] >      netgo
	I1009 19:04:56.846218  161014 command_runner.go:130] >      osusergo
	I1009 19:04:56.846226  161014 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1009 19:04:56.846238  161014 command_runner.go:130] >      seccomp
	I1009 19:04:56.846246  161014 command_runner.go:130] >      apparmor
	I1009 19:04:56.846252  161014 command_runner.go:130] >      selinux
	I1009 19:04:56.846262  161014 command_runner.go:130] >    LDFlags:          unknown
	I1009 19:04:56.846270  161014 command_runner.go:130] >    SeccompEnabled:   true
	I1009 19:04:56.846280  161014 command_runner.go:130] >    AppArmorEnabled:  false
	I1009 19:04:56.849910  161014 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:04:56.851471  161014 cli_runner.go:164] Run: docker network inspect functional-158523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:04:56.867982  161014 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:04:56.872517  161014 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1009 19:04:56.872627  161014 kubeadm.go:883] updating cluster {Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
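The cluster spec that kubeadm.go:883 echoes here is minikube's persisted profile configuration. To compare it against what is stored on the host, the profile's config file can be inspected directly (a sketch; the path assumes the default MINIKUBE_HOME layout and is illustrative):

	head -n 40 ~/.minikube/profiles/functional-158523/config.json   # hypothetical path under the default layout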
	I1009 19:04:56.872731  161014 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:04:56.872790  161014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:04:56.904568  161014 command_runner.go:130] > {
	I1009 19:04:56.904591  161014 command_runner.go:130] >   "images":  [
	I1009 19:04:56.904595  161014 command_runner.go:130] >     {
	I1009 19:04:56.904603  161014 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1009 19:04:56.904608  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.904617  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1009 19:04:56.904622  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904628  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.904652  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1009 19:04:56.904667  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1009 19:04:56.904673  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904681  161014 command_runner.go:130] >       "size":  "109379124",
	I1009 19:04:56.904688  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.904694  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.904700  161014 command_runner.go:130] >     },
	I1009 19:04:56.904706  161014 command_runner.go:130] >     {
	I1009 19:04:56.904719  161014 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 19:04:56.904728  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.904736  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 19:04:56.904744  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904754  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.904771  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 19:04:56.904786  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 19:04:56.904794  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904799  161014 command_runner.go:130] >       "size":  "31470524",
	I1009 19:04:56.904805  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.904814  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.904822  161014 command_runner.go:130] >     },
	I1009 19:04:56.904831  161014 command_runner.go:130] >     {
	I1009 19:04:56.904841  161014 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1009 19:04:56.904851  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.904861  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1009 19:04:56.904870  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904879  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.904890  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1009 19:04:56.904903  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1009 19:04:56.904912  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904919  161014 command_runner.go:130] >       "size":  "76103547",
	I1009 19:04:56.904928  161014 command_runner.go:130] >       "username":  "nonroot",
	I1009 19:04:56.904938  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.904946  161014 command_runner.go:130] >     },
	I1009 19:04:56.904951  161014 command_runner.go:130] >     {
	I1009 19:04:56.904963  161014 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1009 19:04:56.904972  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.904982  161014 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1009 19:04:56.904988  161014 command_runner.go:130] >       ],
	I1009 19:04:56.904994  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905015  161014 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1009 19:04:56.905029  161014 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1009 19:04:56.905038  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905048  161014 command_runner.go:130] >       "size":  "195976448",
	I1009 19:04:56.905056  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905062  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.905071  161014 command_runner.go:130] >       },
	I1009 19:04:56.905082  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905092  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905096  161014 command_runner.go:130] >     },
	I1009 19:04:56.905099  161014 command_runner.go:130] >     {
	I1009 19:04:56.905111  161014 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1009 19:04:56.905120  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905128  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1009 19:04:56.905137  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905147  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905160  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1009 19:04:56.905174  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1009 19:04:56.905182  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905188  161014 command_runner.go:130] >       "size":  "89046001",
	I1009 19:04:56.905195  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905199  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.905207  161014 command_runner.go:130] >       },
	I1009 19:04:56.905218  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905228  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905235  161014 command_runner.go:130] >     },
	I1009 19:04:56.905240  161014 command_runner.go:130] >     {
	I1009 19:04:56.905253  161014 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1009 19:04:56.905262  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905273  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1009 19:04:56.905280  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905284  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905299  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1009 19:04:56.905315  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1009 19:04:56.905324  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905333  161014 command_runner.go:130] >       "size":  "76004181",
	I1009 19:04:56.905342  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905352  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.905360  161014 command_runner.go:130] >       },
	I1009 19:04:56.905367  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905393  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905402  161014 command_runner.go:130] >     },
	I1009 19:04:56.905407  161014 command_runner.go:130] >     {
	I1009 19:04:56.905417  161014 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1009 19:04:56.905427  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905438  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1009 19:04:56.905446  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905456  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905470  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1009 19:04:56.905482  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1009 19:04:56.905490  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905500  161014 command_runner.go:130] >       "size":  "73138073",
	I1009 19:04:56.905510  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905516  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905525  161014 command_runner.go:130] >     },
	I1009 19:04:56.905533  161014 command_runner.go:130] >     {
	I1009 19:04:56.905543  161014 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1009 19:04:56.905552  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905563  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1009 19:04:56.905571  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905579  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905590  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1009 19:04:56.905613  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1009 19:04:56.905622  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905629  161014 command_runner.go:130] >       "size":  "53844823",
	I1009 19:04:56.905637  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905647  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.905655  161014 command_runner.go:130] >       },
	I1009 19:04:56.905664  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905673  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.905681  161014 command_runner.go:130] >     },
	I1009 19:04:56.905690  161014 command_runner.go:130] >     {
	I1009 19:04:56.905696  161014 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1009 19:04:56.905705  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.905712  161014 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1009 19:04:56.905721  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905727  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.905740  161014 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1009 19:04:56.905754  161014 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1009 19:04:56.905762  161014 command_runner.go:130] >       ],
	I1009 19:04:56.905772  161014 command_runner.go:130] >       "size":  "742092",
	I1009 19:04:56.905783  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.905791  161014 command_runner.go:130] >         "value":  "65535"
	I1009 19:04:56.905795  161014 command_runner.go:130] >       },
	I1009 19:04:56.905802  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.905808  161014 command_runner.go:130] >       "pinned":  true
	I1009 19:04:56.905816  161014 command_runner.go:130] >     }
	I1009 19:04:56.905822  161014 command_runner.go:130] >   ]
	I1009 19:04:56.905830  161014 command_runner.go:130] > }
	I1009 19:04:56.906014  161014 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:04:56.906027  161014 crio.go:433] Images already preloaded, skipping extraction
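crio.go:514 decides the preload can be skipped because every image required for Kubernetes v1.34.1 on CRI-O already appears in the crictl listing above. The same listing can be flattened to tags for a manual comparison (a sketch; assumes jq is available on the node):

	sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort
	# should include kube-apiserver/-controller-manager/-scheduler/-proxy:v1.34.1,
	# etcd:3.6.4-0, coredns:v1.12.1, pause:3.10.1, kindnetd and storage-provisioner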
	I1009 19:04:56.906079  161014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:04:56.933720  161014 command_runner.go:130] > {
	I1009 19:04:56.933747  161014 command_runner.go:130] >   "images":  [
	I1009 19:04:56.933753  161014 command_runner.go:130] >     {
	I1009 19:04:56.933769  161014 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1009 19:04:56.933774  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.933781  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1009 19:04:56.933788  161014 command_runner.go:130] >       ],
	I1009 19:04:56.933794  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.933805  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1009 19:04:56.933821  161014 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1009 19:04:56.933827  161014 command_runner.go:130] >       ],
	I1009 19:04:56.933835  161014 command_runner.go:130] >       "size":  "109379124",
	I1009 19:04:56.933845  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.933855  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.933861  161014 command_runner.go:130] >     },
	I1009 19:04:56.933864  161014 command_runner.go:130] >     {
	I1009 19:04:56.933873  161014 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 19:04:56.933879  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.933890  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 19:04:56.933899  161014 command_runner.go:130] >       ],
	I1009 19:04:56.933906  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.933921  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 19:04:56.933935  161014 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 19:04:56.933944  161014 command_runner.go:130] >       ],
	I1009 19:04:56.933951  161014 command_runner.go:130] >       "size":  "31470524",
	I1009 19:04:56.933960  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.933970  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.933975  161014 command_runner.go:130] >     },
	I1009 19:04:56.933979  161014 command_runner.go:130] >     {
	I1009 19:04:56.933992  161014 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1009 19:04:56.934002  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934016  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1009 19:04:56.934029  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934036  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934050  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1009 19:04:56.934065  161014 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1009 19:04:56.934072  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934079  161014 command_runner.go:130] >       "size":  "76103547",
	I1009 19:04:56.934086  161014 command_runner.go:130] >       "username":  "nonroot",
	I1009 19:04:56.934090  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934097  161014 command_runner.go:130] >     },
	I1009 19:04:56.934102  161014 command_runner.go:130] >     {
	I1009 19:04:56.934116  161014 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1009 19:04:56.934126  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934137  161014 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1009 19:04:56.934145  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934151  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934164  161014 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1009 19:04:56.934177  161014 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1009 19:04:56.934183  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934188  161014 command_runner.go:130] >       "size":  "195976448",
	I1009 19:04:56.934197  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934207  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.934216  161014 command_runner.go:130] >       },
	I1009 19:04:56.934263  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934275  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934279  161014 command_runner.go:130] >     },
	I1009 19:04:56.934283  161014 command_runner.go:130] >     {
	I1009 19:04:56.934296  161014 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1009 19:04:56.934306  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934315  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1009 19:04:56.934323  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934329  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934344  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1009 19:04:56.934358  161014 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1009 19:04:56.934372  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934397  161014 command_runner.go:130] >       "size":  "89046001",
	I1009 19:04:56.934408  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934416  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.934425  161014 command_runner.go:130] >       },
	I1009 19:04:56.934435  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934444  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934452  161014 command_runner.go:130] >     },
	I1009 19:04:56.934461  161014 command_runner.go:130] >     {
	I1009 19:04:56.934473  161014 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1009 19:04:56.934480  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934486  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1009 19:04:56.934493  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934499  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934514  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1009 19:04:56.934529  161014 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1009 19:04:56.934538  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934545  161014 command_runner.go:130] >       "size":  "76004181",
	I1009 19:04:56.934554  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934560  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.934566  161014 command_runner.go:130] >       },
	I1009 19:04:56.934572  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934578  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934581  161014 command_runner.go:130] >     },
	I1009 19:04:56.934584  161014 command_runner.go:130] >     {
	I1009 19:04:56.934592  161014 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1009 19:04:56.934597  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934605  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1009 19:04:56.934610  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934616  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934629  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1009 19:04:56.934643  161014 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1009 19:04:56.934652  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934660  161014 command_runner.go:130] >       "size":  "73138073",
	I1009 19:04:56.934667  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934677  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934681  161014 command_runner.go:130] >     },
	I1009 19:04:56.934684  161014 command_runner.go:130] >     {
	I1009 19:04:56.934690  161014 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1009 19:04:56.934696  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934704  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1009 19:04:56.934709  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934716  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934726  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1009 19:04:56.934747  161014 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1009 19:04:56.934753  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934772  161014 command_runner.go:130] >       "size":  "53844823",
	I1009 19:04:56.934779  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934786  161014 command_runner.go:130] >         "value":  "0"
	I1009 19:04:56.934795  161014 command_runner.go:130] >       },
	I1009 19:04:56.934801  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934811  161014 command_runner.go:130] >       "pinned":  false
	I1009 19:04:56.934816  161014 command_runner.go:130] >     },
	I1009 19:04:56.934824  161014 command_runner.go:130] >     {
	I1009 19:04:56.934834  161014 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1009 19:04:56.934843  161014 command_runner.go:130] >       "repoTags":  [
	I1009 19:04:56.934850  161014 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1009 19:04:56.934858  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934862  161014 command_runner.go:130] >       "repoDigests":  [
	I1009 19:04:56.934871  161014 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1009 19:04:56.934886  161014 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1009 19:04:56.934895  161014 command_runner.go:130] >       ],
	I1009 19:04:56.934902  161014 command_runner.go:130] >       "size":  "742092",
	I1009 19:04:56.934910  161014 command_runner.go:130] >       "uid":  {
	I1009 19:04:56.934917  161014 command_runner.go:130] >         "value":  "65535"
	I1009 19:04:56.934926  161014 command_runner.go:130] >       },
	I1009 19:04:56.934934  161014 command_runner.go:130] >       "username":  "",
	I1009 19:04:56.934943  161014 command_runner.go:130] >       "pinned":  true
	I1009 19:04:56.934947  161014 command_runner.go:130] >     }
	I1009 19:04:56.934950  161014 command_runner.go:130] >   ]
	I1009 19:04:56.934953  161014 command_runner.go:130] > }
	I1009 19:04:56.935095  161014 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:04:56.935110  161014 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:04:56.935118  161014 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1009 19:04:56.935242  161014 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-158523 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
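kubeadm.go:946 renders the kubelet systemd drop-in shown above, overriding ExecStart with the node's hostname and IP. Once the drop-in is written, the effective unit and the restart can be checked with standard systemd tooling (a sketch):

	systemctl cat kubelet | grep -A3 '^ExecStart='     # show the override that ends up in effect
	sudo systemctl daemon-reload && sudo systemctl restart kubelet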
	I1009 19:04:56.935323  161014 ssh_runner.go:195] Run: crio config
	I1009 19:04:56.978304  161014 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1009 19:04:56.978336  161014 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1009 19:04:56.978345  161014 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1009 19:04:56.978350  161014 command_runner.go:130] > #
	I1009 19:04:56.978359  161014 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1009 19:04:56.978367  161014 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1009 19:04:56.978390  161014 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1009 19:04:56.978401  161014 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1009 19:04:56.978406  161014 command_runner.go:130] > # reload'.
	I1009 19:04:56.978415  161014 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1009 19:04:56.978436  161014 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1009 19:04:56.978448  161014 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1009 19:04:56.978458  161014 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1009 19:04:56.978464  161014 command_runner.go:130] > [crio]
	I1009 19:04:56.978476  161014 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1009 19:04:56.978484  161014 command_runner.go:130] > # containers images, in this directory.
	I1009 19:04:56.978495  161014 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1009 19:04:56.978505  161014 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1009 19:04:56.978514  161014 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1009 19:04:56.978523  161014 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1009 19:04:56.978532  161014 command_runner.go:130] > # imagestore = ""
	I1009 19:04:56.978541  161014 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1009 19:04:56.978554  161014 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1009 19:04:56.978561  161014 command_runner.go:130] > # storage_driver = "overlay"
	I1009 19:04:56.978571  161014 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1009 19:04:56.978581  161014 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1009 19:04:56.978591  161014 command_runner.go:130] > # storage_option = [
	I1009 19:04:56.978596  161014 command_runner.go:130] > # ]
	I1009 19:04:56.978605  161014 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1009 19:04:56.978616  161014 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1009 19:04:56.978623  161014 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1009 19:04:56.978631  161014 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1009 19:04:56.978640  161014 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1009 19:04:56.978647  161014 command_runner.go:130] > # always happen on a node reboot
	I1009 19:04:56.978654  161014 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1009 19:04:56.978669  161014 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1009 19:04:56.978682  161014 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1009 19:04:56.978689  161014 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1009 19:04:56.978695  161014 command_runner.go:130] > # version_file_persist = ""
	I1009 19:04:56.978714  161014 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1009 19:04:56.978728  161014 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1009 19:04:56.978737  161014 command_runner.go:130] > # internal_wipe = true
	I1009 19:04:56.978748  161014 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1009 19:04:56.978760  161014 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1009 19:04:56.978772  161014 command_runner.go:130] > # internal_repair = true
	I1009 19:04:56.978780  161014 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1009 19:04:56.978794  161014 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1009 19:04:56.978805  161014 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1009 19:04:56.978815  161014 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1009 19:04:56.978825  161014 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1009 19:04:56.978833  161014 command_runner.go:130] > [crio.api]
	I1009 19:04:56.978841  161014 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1009 19:04:56.978851  161014 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1009 19:04:56.978860  161014 command_runner.go:130] > # IP address on which the stream server will listen.
	I1009 19:04:56.978870  161014 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1009 19:04:56.978881  161014 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1009 19:04:56.978892  161014 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1009 19:04:56.978901  161014 command_runner.go:130] > # stream_port = "0"
	I1009 19:04:56.978910  161014 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1009 19:04:56.978920  161014 command_runner.go:130] > # stream_enable_tls = false
	I1009 19:04:56.978929  161014 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1009 19:04:56.978954  161014 command_runner.go:130] > # stream_idle_timeout = ""
	I1009 19:04:56.978969  161014 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1009 19:04:56.978978  161014 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1009 19:04:56.978985  161014 command_runner.go:130] > # stream_tls_cert = ""
	I1009 19:04:56.978999  161014 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1009 19:04:56.979007  161014 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1009 19:04:56.979013  161014 command_runner.go:130] > # stream_tls_key = ""
	I1009 19:04:56.979025  161014 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1009 19:04:56.979039  161014 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1009 19:04:56.979049  161014 command_runner.go:130] > # automatically pick up the changes.
	I1009 19:04:56.979058  161014 command_runner.go:130] > # stream_tls_ca = ""
	I1009 19:04:56.979084  161014 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 19:04:56.979098  161014 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1009 19:04:56.979110  161014 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 19:04:56.979117  161014 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1009 19:04:56.979127  161014 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1009 19:04:56.979134  161014 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1009 19:04:56.979139  161014 command_runner.go:130] > [crio.runtime]
	I1009 19:04:56.979146  161014 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1009 19:04:56.979155  161014 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1009 19:04:56.979163  161014 command_runner.go:130] > # "nofile=1024:2048"
	I1009 19:04:56.979177  161014 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1009 19:04:56.979187  161014 command_runner.go:130] > # default_ulimits = [
	I1009 19:04:56.979193  161014 command_runner.go:130] > # ]
	I1009 19:04:56.979206  161014 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1009 19:04:56.979215  161014 command_runner.go:130] > # no_pivot = false
	I1009 19:04:56.979226  161014 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1009 19:04:56.979239  161014 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1009 19:04:56.979251  161014 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1009 19:04:56.979259  161014 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1009 19:04:56.979267  161014 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1009 19:04:56.979277  161014 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 19:04:56.979283  161014 command_runner.go:130] > # conmon = ""
	I1009 19:04:56.979290  161014 command_runner.go:130] > # Cgroup setting for conmon
	I1009 19:04:56.979301  161014 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1009 19:04:56.979311  161014 command_runner.go:130] > conmon_cgroup = "pod"
	I1009 19:04:56.979320  161014 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1009 19:04:56.979327  161014 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1009 19:04:56.979338  161014 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 19:04:56.979347  161014 command_runner.go:130] > # conmon_env = [
	I1009 19:04:56.979353  161014 command_runner.go:130] > # ]
	I1009 19:04:56.979364  161014 command_runner.go:130] > # Additional environment variables to set for all the
	I1009 19:04:56.979392  161014 command_runner.go:130] > # containers. These are overridden if set in the
	I1009 19:04:56.979406  161014 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1009 19:04:56.979412  161014 command_runner.go:130] > # default_env = [
	I1009 19:04:56.979420  161014 command_runner.go:130] > # ]
	I1009 19:04:56.979429  161014 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1009 19:04:56.979443  161014 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1009 19:04:56.979453  161014 command_runner.go:130] > # selinux = false
	I1009 19:04:56.979463  161014 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1009 19:04:56.979479  161014 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1009 19:04:56.979489  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.979497  161014 command_runner.go:130] > # seccomp_profile = ""
	I1009 19:04:56.979509  161014 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1009 19:04:56.979522  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.979529  161014 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1009 19:04:56.979542  161014 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1009 19:04:56.979555  161014 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1009 19:04:56.979564  161014 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1009 19:04:56.979574  161014 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1009 19:04:56.979585  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.979593  161014 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1009 19:04:56.979605  161014 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1009 19:04:56.979615  161014 command_runner.go:130] > # the cgroup blockio controller.
	I1009 19:04:56.979622  161014 command_runner.go:130] > # blockio_config_file = ""
	I1009 19:04:56.979636  161014 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1009 19:04:56.979642  161014 command_runner.go:130] > # blockio parameters.
	I1009 19:04:56.979648  161014 command_runner.go:130] > # blockio_reload = false
	I1009 19:04:56.979658  161014 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1009 19:04:56.979664  161014 command_runner.go:130] > # irqbalance daemon.
	I1009 19:04:56.979672  161014 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1009 19:04:56.979681  161014 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1009 19:04:56.979690  161014 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1009 19:04:56.979700  161014 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1009 19:04:56.979710  161014 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1009 19:04:56.979724  161014 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1009 19:04:56.979731  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.979741  161014 command_runner.go:130] > # rdt_config_file = ""
	I1009 19:04:56.979753  161014 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1009 19:04:56.979764  161014 command_runner.go:130] > # cgroup_manager = "systemd"
	I1009 19:04:56.979773  161014 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1009 19:04:56.979783  161014 command_runner.go:130] > # separate_pull_cgroup = ""
	I1009 19:04:56.979791  161014 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1009 19:04:56.979800  161014 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1009 19:04:56.979809  161014 command_runner.go:130] > # will be added.
	I1009 19:04:56.979817  161014 command_runner.go:130] > # default_capabilities = [
	I1009 19:04:56.979826  161014 command_runner.go:130] > # 	"CHOWN",
	I1009 19:04:56.979832  161014 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1009 19:04:56.979840  161014 command_runner.go:130] > # 	"FSETID",
	I1009 19:04:56.979846  161014 command_runner.go:130] > # 	"FOWNER",
	I1009 19:04:56.979855  161014 command_runner.go:130] > # 	"SETGID",
	I1009 19:04:56.979876  161014 command_runner.go:130] > # 	"SETUID",
	I1009 19:04:56.979885  161014 command_runner.go:130] > # 	"SETPCAP",
	I1009 19:04:56.979891  161014 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1009 19:04:56.979901  161014 command_runner.go:130] > # 	"KILL",
	I1009 19:04:56.979906  161014 command_runner.go:130] > # ]
	I1009 19:04:56.979920  161014 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1009 19:04:56.979930  161014 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1009 19:04:56.979950  161014 command_runner.go:130] > # add_inheritable_capabilities = false
	I1009 19:04:56.979963  161014 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1009 19:04:56.979972  161014 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 19:04:56.979977  161014 command_runner.go:130] > default_sysctls = [
	I1009 19:04:56.979993  161014 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1009 19:04:56.979997  161014 command_runner.go:130] > ]
	I1009 19:04:56.980003  161014 command_runner.go:130] > # List of devices on the host that a
	I1009 19:04:56.980010  161014 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1009 19:04:56.980015  161014 command_runner.go:130] > # allowed_devices = [
	I1009 19:04:56.980019  161014 command_runner.go:130] > # 	"/dev/fuse",
	I1009 19:04:56.980024  161014 command_runner.go:130] > # 	"/dev/net/tun",
	I1009 19:04:56.980029  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980035  161014 command_runner.go:130] > # List of additional devices. specified as
	I1009 19:04:56.980047  161014 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1009 19:04:56.980055  161014 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1009 19:04:56.980063  161014 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 19:04:56.980069  161014 command_runner.go:130] > # additional_devices = [
	I1009 19:04:56.980072  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980079  161014 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1009 19:04:56.980084  161014 command_runner.go:130] > # cdi_spec_dirs = [
	I1009 19:04:56.980091  161014 command_runner.go:130] > # 	"/etc/cdi",
	I1009 19:04:56.980097  161014 command_runner.go:130] > # 	"/var/run/cdi",
	I1009 19:04:56.980101  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980111  161014 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1009 19:04:56.980120  161014 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1009 19:04:56.980126  161014 command_runner.go:130] > # Defaults to false.
	I1009 19:04:56.980133  161014 command_runner.go:130] > # device_ownership_from_security_context = false
	I1009 19:04:56.980146  161014 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1009 19:04:56.980157  161014 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1009 19:04:56.980163  161014 command_runner.go:130] > # hooks_dir = [
	I1009 19:04:56.980167  161014 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1009 19:04:56.980173  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980179  161014 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1009 19:04:56.980187  161014 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1009 19:04:56.980192  161014 command_runner.go:130] > # its default mounts from the following two files:
	I1009 19:04:56.980197  161014 command_runner.go:130] > #
	I1009 19:04:56.980202  161014 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1009 19:04:56.980211  161014 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1009 19:04:56.980218  161014 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1009 19:04:56.980221  161014 command_runner.go:130] > #
	I1009 19:04:56.980230  161014 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1009 19:04:56.980236  161014 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1009 19:04:56.980244  161014 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1009 19:04:56.980252  161014 command_runner.go:130] > #      only add mounts it finds in this file.
	I1009 19:04:56.980255  161014 command_runner.go:130] > #
	I1009 19:04:56.980261  161014 command_runner.go:130] > # default_mounts_file = ""
	I1009 19:04:56.980266  161014 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1009 19:04:56.980275  161014 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1009 19:04:56.980281  161014 command_runner.go:130] > # pids_limit = -1
	I1009 19:04:56.980286  161014 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1009 19:04:56.980294  161014 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1009 19:04:56.980300  161014 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1009 19:04:56.980309  161014 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1009 19:04:56.980315  161014 command_runner.go:130] > # log_size_max = -1
	I1009 19:04:56.980322  161014 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1009 19:04:56.980328  161014 command_runner.go:130] > # log_to_journald = false
	I1009 19:04:56.980335  161014 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1009 19:04:56.980341  161014 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1009 19:04:56.980345  161014 command_runner.go:130] > # Path to directory for container attach sockets.
	I1009 19:04:56.980352  161014 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1009 19:04:56.980357  161014 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1009 19:04:56.980365  161014 command_runner.go:130] > # bind_mount_prefix = ""
	I1009 19:04:56.980370  161014 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1009 19:04:56.980376  161014 command_runner.go:130] > # read_only = false
	I1009 19:04:56.980395  161014 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1009 19:04:56.980405  161014 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1009 19:04:56.980413  161014 command_runner.go:130] > # live configuration reload.
	I1009 19:04:56.980417  161014 command_runner.go:130] > # log_level = "info"
	I1009 19:04:56.980425  161014 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1009 19:04:56.980430  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.980435  161014 command_runner.go:130] > # log_filter = ""
	I1009 19:04:56.980441  161014 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1009 19:04:56.980449  161014 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1009 19:04:56.980455  161014 command_runner.go:130] > # separated by comma.
	I1009 19:04:56.980462  161014 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:04:56.980467  161014 command_runner.go:130] > # uid_mappings = ""
	I1009 19:04:56.980473  161014 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1009 19:04:56.980480  161014 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1009 19:04:56.980486  161014 command_runner.go:130] > # separated by comma.
	I1009 19:04:56.980496  161014 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:04:56.980502  161014 command_runner.go:130] > # gid_mappings = ""
	I1009 19:04:56.980508  161014 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1009 19:04:56.980516  161014 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 19:04:56.980524  161014 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 19:04:56.980534  161014 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:04:56.980540  161014 command_runner.go:130] > # minimum_mappable_uid = -1
	I1009 19:04:56.980547  161014 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1009 19:04:56.980556  161014 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 19:04:56.980562  161014 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 19:04:56.980569  161014 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:04:56.980575  161014 command_runner.go:130] > # minimum_mappable_gid = -1
	I1009 19:04:56.980581  161014 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1009 19:04:56.980588  161014 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1009 19:04:56.980593  161014 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1009 19:04:56.980599  161014 command_runner.go:130] > # ctr_stop_timeout = 30
	I1009 19:04:56.980605  161014 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1009 19:04:56.980612  161014 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1009 19:04:56.980616  161014 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1009 19:04:56.980623  161014 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1009 19:04:56.980627  161014 command_runner.go:130] > # drop_infra_ctr = true
	I1009 19:04:56.980635  161014 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1009 19:04:56.980640  161014 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1009 19:04:56.980649  161014 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1009 19:04:56.980657  161014 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1009 19:04:56.980666  161014 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1009 19:04:56.980674  161014 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1009 19:04:56.980682  161014 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1009 19:04:56.980687  161014 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1009 19:04:56.980695  161014 command_runner.go:130] > # shared_cpuset = ""
	I1009 19:04:56.980703  161014 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1009 19:04:56.980707  161014 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1009 19:04:56.980712  161014 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1009 19:04:56.980719  161014 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1009 19:04:56.980725  161014 command_runner.go:130] > # pinns_path = ""
	I1009 19:04:56.980730  161014 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1009 19:04:56.980738  161014 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1009 19:04:56.980742  161014 command_runner.go:130] > # enable_criu_support = true
	I1009 19:04:56.980749  161014 command_runner.go:130] > # Enable/disable the generation of the container,
	I1009 19:04:56.980754  161014 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1009 19:04:56.980761  161014 command_runner.go:130] > # enable_pod_events = false
	I1009 19:04:56.980767  161014 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1009 19:04:56.980775  161014 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1009 19:04:56.980779  161014 command_runner.go:130] > # default_runtime = "crun"
	I1009 19:04:56.980785  161014 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1009 19:04:56.980792  161014 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1009 19:04:56.980803  161014 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1009 19:04:56.980809  161014 command_runner.go:130] > # creation as a file is not desired either.
	I1009 19:04:56.980817  161014 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1009 19:04:56.980823  161014 command_runner.go:130] > # the hostname is being managed dynamically.
	I1009 19:04:56.980828  161014 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1009 19:04:56.980831  161014 command_runner.go:130] > # ]
	I1009 19:04:56.980836  161014 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1009 19:04:56.980844  161014 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1009 19:04:56.980850  161014 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1009 19:04:56.980858  161014 command_runner.go:130] > # Each entry in the table should follow the format:
	I1009 19:04:56.980861  161014 command_runner.go:130] > #
	I1009 19:04:56.980865  161014 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1009 19:04:56.980872  161014 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1009 19:04:56.980875  161014 command_runner.go:130] > # runtime_type = "oci"
	I1009 19:04:56.980882  161014 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1009 19:04:56.980887  161014 command_runner.go:130] > # inherit_default_runtime = false
	I1009 19:04:56.980894  161014 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1009 19:04:56.980898  161014 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1009 19:04:56.980902  161014 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1009 19:04:56.980906  161014 command_runner.go:130] > # monitor_env = []
	I1009 19:04:56.980910  161014 command_runner.go:130] > # privileged_without_host_devices = false
	I1009 19:04:56.980917  161014 command_runner.go:130] > # allowed_annotations = []
	I1009 19:04:56.980922  161014 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1009 19:04:56.980928  161014 command_runner.go:130] > # no_sync_log = false
	I1009 19:04:56.980932  161014 command_runner.go:130] > # default_annotations = {}
	I1009 19:04:56.980939  161014 command_runner.go:130] > # stream_websockets = false
	I1009 19:04:56.980949  161014 command_runner.go:130] > # seccomp_profile = ""
	I1009 19:04:56.980985  161014 command_runner.go:130] > # Where:
	I1009 19:04:56.980994  161014 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1009 19:04:56.980999  161014 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1009 19:04:56.981005  161014 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1009 19:04:56.981010  161014 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1009 19:04:56.981014  161014 command_runner.go:130] > #   in $PATH.
	I1009 19:04:56.981020  161014 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1009 19:04:56.981024  161014 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1009 19:04:56.981032  161014 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1009 19:04:56.981035  161014 command_runner.go:130] > #   state.
	I1009 19:04:56.981041  161014 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1009 19:04:56.981049  161014 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1009 19:04:56.981054  161014 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1009 19:04:56.981063  161014 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1009 19:04:56.981067  161014 command_runner.go:130] > #   the values from the default runtime on load time.
	I1009 19:04:56.981078  161014 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1009 19:04:56.981086  161014 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1009 19:04:56.981092  161014 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1009 19:04:56.981100  161014 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1009 19:04:56.981105  161014 command_runner.go:130] > #   The currently recognized values are:
	I1009 19:04:56.981113  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1009 19:04:56.981123  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1009 19:04:56.981130  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1009 19:04:56.981135  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1009 19:04:56.981144  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1009 19:04:56.981153  161014 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1009 19:04:56.981161  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1009 19:04:56.981169  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1009 19:04:56.981177  161014 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1009 19:04:56.981183  161014 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1009 19:04:56.981191  161014 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1009 19:04:56.981199  161014 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1009 19:04:56.981204  161014 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1009 19:04:56.981213  161014 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1009 19:04:56.981221  161014 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1009 19:04:56.981227  161014 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1009 19:04:56.981235  161014 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1009 19:04:56.981239  161014 command_runner.go:130] > #   deprecated option "conmon".
	I1009 19:04:56.981248  161014 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1009 19:04:56.981255  161014 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1009 19:04:56.981261  161014 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1009 19:04:56.981268  161014 command_runner.go:130] > #   should be moved to the container's cgroup
	I1009 19:04:56.981273  161014 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1009 19:04:56.981280  161014 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1009 19:04:56.981287  161014 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1009 19:04:56.981293  161014 command_runner.go:130] > #   conmon-rs by using:
	I1009 19:04:56.981300  161014 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1009 19:04:56.981309  161014 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1009 19:04:56.981318  161014 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1009 19:04:56.981326  161014 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1009 19:04:56.981334  161014 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1009 19:04:56.981341  161014 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1009 19:04:56.981351  161014 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1009 19:04:56.981359  161014 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1009 19:04:56.981370  161014 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1009 19:04:56.981395  161014 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1009 19:04:56.981405  161014 command_runner.go:130] > #   when a machine crash happens.
	I1009 19:04:56.981411  161014 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1009 19:04:56.981421  161014 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1009 19:04:56.981431  161014 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1009 19:04:56.981437  161014 command_runner.go:130] > #   seccomp profile for the runtime.
	I1009 19:04:56.981443  161014 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1009 19:04:56.981452  161014 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1009 19:04:56.981455  161014 command_runner.go:130] > #
	I1009 19:04:56.981460  161014 command_runner.go:130] > # Using the seccomp notifier feature:
	I1009 19:04:56.981465  161014 command_runner.go:130] > #
	I1009 19:04:56.981472  161014 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1009 19:04:56.981480  161014 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1009 19:04:56.981483  161014 command_runner.go:130] > #
	I1009 19:04:56.981490  161014 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1009 19:04:56.981498  161014 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1009 19:04:56.981501  161014 command_runner.go:130] > #
	I1009 19:04:56.981507  161014 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1009 19:04:56.981512  161014 command_runner.go:130] > # feature.
	I1009 19:04:56.981515  161014 command_runner.go:130] > #
	I1009 19:04:56.981537  161014 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1009 19:04:56.981545  161014 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1009 19:04:56.981553  161014 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1009 19:04:56.981562  161014 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1009 19:04:56.981568  161014 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1009 19:04:56.981573  161014 command_runner.go:130] > #
	I1009 19:04:56.981579  161014 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1009 19:04:56.981587  161014 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1009 19:04:56.981590  161014 command_runner.go:130] > #
	I1009 19:04:56.981598  161014 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1009 19:04:56.981603  161014 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1009 19:04:56.981608  161014 command_runner.go:130] > #
	I1009 19:04:56.981614  161014 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1009 19:04:56.981622  161014 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1009 19:04:56.981628  161014 command_runner.go:130] > # limitation.
	I1009 19:04:56.981632  161014 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1009 19:04:56.981639  161014 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1009 19:04:56.981642  161014 command_runner.go:130] > runtime_type = ""
	I1009 19:04:56.981648  161014 command_runner.go:130] > runtime_root = "/run/crun"
	I1009 19:04:56.981652  161014 command_runner.go:130] > inherit_default_runtime = false
	I1009 19:04:56.981657  161014 command_runner.go:130] > runtime_config_path = ""
	I1009 19:04:56.981663  161014 command_runner.go:130] > container_min_memory = ""
	I1009 19:04:56.981667  161014 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 19:04:56.981673  161014 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 19:04:56.981677  161014 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 19:04:56.981683  161014 command_runner.go:130] > allowed_annotations = [
	I1009 19:04:56.981687  161014 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1009 19:04:56.981694  161014 command_runner.go:130] > ]
	I1009 19:04:56.981699  161014 command_runner.go:130] > privileged_without_host_devices = false
	I1009 19:04:56.981705  161014 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1009 19:04:56.981709  161014 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1009 19:04:56.981715  161014 command_runner.go:130] > runtime_type = ""
	I1009 19:04:56.981719  161014 command_runner.go:130] > runtime_root = "/run/runc"
	I1009 19:04:56.981725  161014 command_runner.go:130] > inherit_default_runtime = false
	I1009 19:04:56.981729  161014 command_runner.go:130] > runtime_config_path = ""
	I1009 19:04:56.981735  161014 command_runner.go:130] > container_min_memory = ""
	I1009 19:04:56.981739  161014 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 19:04:56.981744  161014 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 19:04:56.981750  161014 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 19:04:56.981754  161014 command_runner.go:130] > privileged_without_host_devices = false
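	Since the runtime to use is picked from the runtime handler provided by the CRI, a pod can select one of the handlers defined above through a standard Kubernetes RuntimeClass. A minimal sketch, assuming the "runc" handler configured above (object names are hypothetical):

	  apiVersion: node.k8s.io/v1
	  kind: RuntimeClass
	  metadata:
	    name: runc-class                     # hypothetical name
	  handler: runc                          # must match a [crio.runtime.runtimes.<handler>] entry
	  ---
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: runc-demo                      # hypothetical name
	  spec:
	    runtimeClassName: runc-class         # pods without this fall back to default_runtime
	    containers:
	    - name: app
	      image: registry.k8s.io/pause:3.10.1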
	I1009 19:04:56.981761  161014 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1009 19:04:56.981769  161014 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1009 19:04:56.981774  161014 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1009 19:04:56.981783  161014 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1009 19:04:56.981795  161014 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1009 19:04:56.981807  161014 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1009 19:04:56.981815  161014 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1009 19:04:56.981823  161014 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1009 19:04:56.981831  161014 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1009 19:04:56.981840  161014 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1009 19:04:56.981848  161014 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1009 19:04:56.981854  161014 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1009 19:04:56.981859  161014 command_runner.go:130] > # Example:
	I1009 19:04:56.981864  161014 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1009 19:04:56.981871  161014 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1009 19:04:56.981875  161014 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1009 19:04:56.981884  161014 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1009 19:04:56.981899  161014 command_runner.go:130] > # cpuset = "0-1"
	I1009 19:04:56.981905  161014 command_runner.go:130] > # cpushares = "5"
	I1009 19:04:56.981909  161014 command_runner.go:130] > # cpuquota = "1000"
	I1009 19:04:56.981912  161014 command_runner.go:130] > # cpuperiod = "100000"
	I1009 19:04:56.981920  161014 command_runner.go:130] > # cpulimit = "35"
	I1009 19:04:56.981926  161014 command_runner.go:130] > # Where:
	I1009 19:04:56.981936  161014 command_runner.go:130] > # The workload name is workload-type.
	I1009 19:04:56.981948  161014 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1009 19:04:56.981955  161014 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1009 19:04:56.981962  161014 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1009 19:04:56.981971  161014 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1009 19:04:56.981979  161014 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
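	A pod opting into the example "workload-type" workload above would carry the activation annotation (key only) plus an optional per-container override following the $annotation_prefix.$resource/$ctrName form documented above. A minimal sketch with hypothetical names and values:

	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: workload-demo                            # hypothetical name
	    annotations:
	      io.crio/workload: ""                         # activation annotation; the value is ignored
	      io.crio.workload-type.cpushares/app: "5"     # override cpushares for the container named "app"
	  spec:
	    containers:
	    - name: app
	      image: registry.k8s.io/pause:3.10.1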
	I1009 19:04:56.981984  161014 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1009 19:04:56.981993  161014 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1009 19:04:56.981997  161014 command_runner.go:130] > # Default value is set to true
	I1009 19:04:56.982003  161014 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1009 19:04:56.982009  161014 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1009 19:04:56.982013  161014 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1009 19:04:56.982017  161014 command_runner.go:130] > # Default value is set to 'false'
	I1009 19:04:56.982020  161014 command_runner.go:130] > # disable_hostport_mapping = false
	I1009 19:04:56.982025  161014 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1009 19:04:56.982034  161014 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1009 19:04:56.982039  161014 command_runner.go:130] > # timezone = ""
	I1009 19:04:56.982045  161014 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1009 19:04:56.982050  161014 command_runner.go:130] > #
	I1009 19:04:56.982056  161014 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1009 19:04:56.982064  161014 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1009 19:04:56.982067  161014 command_runner.go:130] > [crio.image]
	I1009 19:04:56.982072  161014 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1009 19:04:56.982080  161014 command_runner.go:130] > # default_transport = "docker://"
	I1009 19:04:56.982085  161014 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1009 19:04:56.982093  161014 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1009 19:04:56.982100  161014 command_runner.go:130] > # global_auth_file = ""
	I1009 19:04:56.982105  161014 command_runner.go:130] > # The image used to instantiate infra containers.
	I1009 19:04:56.982112  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.982116  161014 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1009 19:04:56.982124  161014 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1009 19:04:56.982132  161014 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1009 19:04:56.982137  161014 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:04:56.982143  161014 command_runner.go:130] > # pause_image_auth_file = ""
	I1009 19:04:56.982148  161014 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1009 19:04:56.982156  161014 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1009 19:04:56.982162  161014 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1009 19:04:56.982170  161014 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1009 19:04:56.982173  161014 command_runner.go:130] > # pause_command = "/pause"
	I1009 19:04:56.982178  161014 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1009 19:04:56.982186  161014 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1009 19:04:56.982191  161014 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1009 19:04:56.982199  161014 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1009 19:04:56.982204  161014 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1009 19:04:56.982213  161014 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1009 19:04:56.982219  161014 command_runner.go:130] > # pinned_images = [
	I1009 19:04:56.982222  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982227  161014 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1009 19:04:56.982235  161014 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1009 19:04:56.982241  161014 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1009 19:04:56.982248  161014 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1009 19:04:56.982253  161014 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1009 19:04:56.982260  161014 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1009 19:04:56.982265  161014 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1009 19:04:56.982274  161014 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1009 19:04:56.982282  161014 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1009 19:04:56.982287  161014 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1009 19:04:56.982295  161014 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1009 19:04:56.982302  161014 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1009 19:04:56.982307  161014 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1009 19:04:56.982316  161014 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1009 19:04:56.982322  161014 command_runner.go:130] > # changing them here.
	I1009 19:04:56.982327  161014 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1009 19:04:56.982333  161014 command_runner.go:130] > # insecure_registries = [
	I1009 19:04:56.982336  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982342  161014 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1009 19:04:56.982352  161014 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1009 19:04:56.982359  161014 command_runner.go:130] > # image_volumes = "mkdir"
	I1009 19:04:56.982364  161014 command_runner.go:130] > # Temporary directory to use for storing big files
	I1009 19:04:56.982370  161014 command_runner.go:130] > # big_files_temporary_dir = ""
	I1009 19:04:56.982385  161014 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1009 19:04:56.982398  161014 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1009 19:04:56.982403  161014 command_runner.go:130] > # auto_reload_registries = false
	I1009 19:04:56.982412  161014 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1009 19:04:56.982419  161014 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1009 19:04:56.982427  161014 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1009 19:04:56.982431  161014 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1009 19:04:56.982435  161014 command_runner.go:130] > # The mode of short name resolution.
	I1009 19:04:56.982441  161014 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1009 19:04:56.982450  161014 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1009 19:04:56.982455  161014 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1009 19:04:56.982460  161014 command_runner.go:130] > # short_name_mode = "enforcing"
	I1009 19:04:56.982465  161014 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1009 19:04:56.982472  161014 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1009 19:04:56.982476  161014 command_runner.go:130] > # oci_artifact_mount_support = true
	I1009 19:04:56.982484  161014 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1009 19:04:56.982487  161014 command_runner.go:130] > # CNI plugins.
	I1009 19:04:56.982490  161014 command_runner.go:130] > [crio.network]
	I1009 19:04:56.982496  161014 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1009 19:04:56.982501  161014 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1009 19:04:56.982507  161014 command_runner.go:130] > # cni_default_network = ""
	I1009 19:04:56.982512  161014 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1009 19:04:56.982519  161014 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1009 19:04:56.982524  161014 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1009 19:04:56.982530  161014 command_runner.go:130] > # plugin_dirs = [
	I1009 19:04:56.982533  161014 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1009 19:04:56.982536  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982540  161014 command_runner.go:130] > # List of included pod metrics.
	I1009 19:04:56.982544  161014 command_runner.go:130] > # included_pod_metrics = [
	I1009 19:04:56.982547  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982552  161014 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1009 19:04:56.982558  161014 command_runner.go:130] > [crio.metrics]
	I1009 19:04:56.982562  161014 command_runner.go:130] > # Globally enable or disable metrics support.
	I1009 19:04:56.982566  161014 command_runner.go:130] > # enable_metrics = false
	I1009 19:04:56.982570  161014 command_runner.go:130] > # Specify enabled metrics collectors.
	I1009 19:04:56.982574  161014 command_runner.go:130] > # Per default all metrics are enabled.
	I1009 19:04:56.982579  161014 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1009 19:04:56.982588  161014 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1009 19:04:56.982593  161014 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1009 19:04:56.982598  161014 command_runner.go:130] > # metrics_collectors = [
	I1009 19:04:56.982602  161014 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1009 19:04:56.982607  161014 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1009 19:04:56.982610  161014 command_runner.go:130] > # 	"containers_oom_total",
	I1009 19:04:56.982614  161014 command_runner.go:130] > # 	"processes_defunct",
	I1009 19:04:56.982617  161014 command_runner.go:130] > # 	"operations_total",
	I1009 19:04:56.982621  161014 command_runner.go:130] > # 	"operations_latency_seconds",
	I1009 19:04:56.982625  161014 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1009 19:04:56.982629  161014 command_runner.go:130] > # 	"operations_errors_total",
	I1009 19:04:56.982632  161014 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1009 19:04:56.982636  161014 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1009 19:04:56.982640  161014 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1009 19:04:56.982643  161014 command_runner.go:130] > # 	"image_pulls_success_total",
	I1009 19:04:56.982648  161014 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1009 19:04:56.982652  161014 command_runner.go:130] > # 	"containers_oom_count_total",
	I1009 19:04:56.982656  161014 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1009 19:04:56.982660  161014 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1009 19:04:56.982664  161014 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1009 19:04:56.982667  161014 command_runner.go:130] > # ]
	I1009 19:04:56.982672  161014 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1009 19:04:56.982675  161014 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1009 19:04:56.982680  161014 command_runner.go:130] > # The port on which the metrics server will listen.
	I1009 19:04:56.982683  161014 command_runner.go:130] > # metrics_port = 9090
	I1009 19:04:56.982689  161014 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1009 19:04:56.982693  161014 command_runner.go:130] > # metrics_socket = ""
	I1009 19:04:56.982698  161014 command_runner.go:130] > # The certificate for the secure metrics server.
	I1009 19:04:56.982706  161014 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1009 19:04:56.982712  161014 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1009 19:04:56.982718  161014 command_runner.go:130] > # certificate on any modification event.
	I1009 19:04:56.982722  161014 command_runner.go:130] > # metrics_cert = ""
	I1009 19:04:56.982735  161014 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1009 19:04:56.982741  161014 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1009 19:04:56.982746  161014 command_runner.go:130] > # metrics_key = ""
	I1009 19:04:56.982753  161014 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1009 19:04:56.982758  161014 command_runner.go:130] > [crio.tracing]
	I1009 19:04:56.982766  161014 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1009 19:04:56.982771  161014 command_runner.go:130] > # enable_tracing = false
	I1009 19:04:56.982779  161014 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1009 19:04:56.982788  161014 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1009 19:04:56.982798  161014 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1009 19:04:56.982809  161014 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1009 19:04:56.982818  161014 command_runner.go:130] > # CRI-O NRI configuration.
	I1009 19:04:56.982821  161014 command_runner.go:130] > [crio.nri]
	I1009 19:04:56.982825  161014 command_runner.go:130] > # Globally enable or disable NRI.
	I1009 19:04:56.982832  161014 command_runner.go:130] > # enable_nri = true
	I1009 19:04:56.982836  161014 command_runner.go:130] > # NRI socket to listen on.
	I1009 19:04:56.982842  161014 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1009 19:04:56.982846  161014 command_runner.go:130] > # NRI plugin directory to use.
	I1009 19:04:56.982851  161014 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1009 19:04:56.982856  161014 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1009 19:04:56.982863  161014 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1009 19:04:56.982868  161014 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1009 19:04:56.982900  161014 command_runner.go:130] > # nri_disable_connections = false
	I1009 19:04:56.982908  161014 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1009 19:04:56.982912  161014 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1009 19:04:56.982916  161014 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1009 19:04:56.982920  161014 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1009 19:04:56.982926  161014 command_runner.go:130] > # NRI default validator configuration.
	I1009 19:04:56.982933  161014 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1009 19:04:56.982946  161014 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1009 19:04:56.982953  161014 command_runner.go:130] > # can be restricted/rejected:
	I1009 19:04:56.982956  161014 command_runner.go:130] > # - OCI hook injection
	I1009 19:04:56.982961  161014 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1009 19:04:56.982969  161014 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1009 19:04:56.982974  161014 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1009 19:04:56.982982  161014 command_runner.go:130] > # - adjustment of linux namespaces
	I1009 19:04:56.982988  161014 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1009 19:04:56.982996  161014 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1009 19:04:56.983002  161014 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1009 19:04:56.983007  161014 command_runner.go:130] > #
	I1009 19:04:56.983011  161014 command_runner.go:130] > # [crio.nri.default_validator]
	I1009 19:04:56.983015  161014 command_runner.go:130] > # nri_enable_default_validator = false
	I1009 19:04:56.983020  161014 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1009 19:04:56.983027  161014 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1009 19:04:56.983032  161014 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1009 19:04:56.983039  161014 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1009 19:04:56.983044  161014 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1009 19:04:56.983050  161014 command_runner.go:130] > # nri_validator_required_plugins = [
	I1009 19:04:56.983053  161014 command_runner.go:130] > # ]
	I1009 19:04:56.983058  161014 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1009 19:04:56.983066  161014 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1009 19:04:56.983069  161014 command_runner.go:130] > [crio.stats]
	I1009 19:04:56.983074  161014 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1009 19:04:56.983087  161014 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1009 19:04:56.983092  161014 command_runner.go:130] > # stats_collection_period = 0
	I1009 19:04:56.983097  161014 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1009 19:04:56.983106  161014 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1009 19:04:56.983109  161014 command_runner.go:130] > # collection_period = 0
	I1009 19:04:56.983133  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.961902946Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1009 19:04:56.983143  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.961928249Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1009 19:04:56.983151  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.961952575Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1009 19:04:56.983160  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.961969788Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1009 19:04:56.983168  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.962036562Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:04:56.983178  161014 command_runner.go:130] ! time="2025-10-09T19:04:56.96221376Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1009 19:04:56.983187  161014 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1009 19:04:56.983250  161014 cni.go:84] Creating CNI manager for ""
	I1009 19:04:56.983259  161014 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:04:56.983280  161014 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:04:56.983306  161014 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-158523 NodeName:functional-158523 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:04:56.983442  161014 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-158523"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:04:56.983504  161014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:04:56.992256  161014 command_runner.go:130] > kubeadm
	I1009 19:04:56.992278  161014 command_runner.go:130] > kubectl
	I1009 19:04:56.992282  161014 command_runner.go:130] > kubelet
	I1009 19:04:56.992304  161014 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:04:56.992347  161014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:04:57.000522  161014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 19:04:57.013113  161014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:04:57.026211  161014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1009 19:04:57.038776  161014 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:04:57.042573  161014 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1009 19:04:57.042649  161014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:04:57.130268  161014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:04:57.143785  161014 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523 for IP: 192.168.49.2
	I1009 19:04:57.143808  161014 certs.go:195] generating shared ca certs ...
	I1009 19:04:57.143829  161014 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:04:57.144031  161014 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:04:57.144072  161014 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:04:57.144082  161014 certs.go:257] generating profile certs ...
	I1009 19:04:57.144182  161014 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key
	I1009 19:04:57.144224  161014 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key.1809350a
	I1009 19:04:57.144260  161014 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key
	I1009 19:04:57.144272  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:04:57.144283  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:04:57.144293  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:04:57.144302  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:04:57.144314  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:04:57.144325  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:04:57.144336  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:04:57.144348  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:04:57.144426  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:04:57.144461  161014 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:04:57.144470  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:04:57.144493  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:04:57.144516  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:04:57.144537  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:04:57.144579  161014 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:04:57.144605  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.144619  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.144631  161014 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.145144  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:04:57.163977  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:04:57.182180  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:04:57.200741  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:04:57.219086  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:04:57.236775  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:04:57.254529  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:04:57.272276  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:04:57.290804  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:04:57.309893  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:04:57.327963  161014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:04:57.345810  161014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:04:57.359185  161014 ssh_runner.go:195] Run: openssl version
	I1009 19:04:57.366137  161014 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1009 19:04:57.366338  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:04:57.375985  161014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.380041  161014 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.380082  161014 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.380117  161014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:04:57.415315  161014 command_runner.go:130] > b5213941
	I1009 19:04:57.415413  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:04:57.424315  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:04:57.433300  161014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.437553  161014 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.437594  161014 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.437635  161014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:04:57.472859  161014 command_runner.go:130] > 51391683
	I1009 19:04:57.473177  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:04:57.481800  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:04:57.490997  161014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.494992  161014 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.495040  161014 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.495095  161014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:04:57.529155  161014 command_runner.go:130] > 3ec20f2e
	I1009 19:04:57.529240  161014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
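	The three ln -fs steps above install minikube's CAs into the node's OpenSSL trust store: each certificate's subject hash becomes the name of a /etc/ssl/certs/<hash>.0 symlink. A minimal Go sketch of that step follows; it shells out to openssl exactly as the log does, and the helper name is illustrative only, not minikube's actual code (the link target path needs root, like the sudo calls above).

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCATrustLink mirrors the pattern above: compute the subject hash of a
	// CA certificate with openssl, then force-link /etc/ssl/certs/<hash>.0 to it.
	func installCATrustLink(certPath string) error {
		// openssl x509 -hash -noout -in <certPath>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")

		// ln -fs <certPath> /etc/ssl/certs/<hash>.0
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCATrustLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}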
	I1009 19:04:57.537710  161014 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:04:57.541624  161014 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:04:57.541645  161014 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1009 19:04:57.541653  161014 command_runner.go:130] > Device: 8,1	Inode: 573939      Links: 1
	I1009 19:04:57.541662  161014 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 19:04:57.541679  161014 command_runner.go:130] > Access: 2025-10-09 19:00:49.271404553 +0000
	I1009 19:04:57.541690  161014 command_runner.go:130] > Modify: 2025-10-09 18:56:44.405307509 +0000
	I1009 19:04:57.541704  161014 command_runner.go:130] > Change: 2025-10-09 18:56:44.405307509 +0000
	I1009 19:04:57.541714  161014 command_runner.go:130] >  Birth: 2025-10-09 18:56:44.405307509 +0000
	I1009 19:04:57.541773  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:04:57.576034  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.576418  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:04:57.610746  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.611106  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:04:57.645558  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.645650  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:04:57.680926  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.681269  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:04:57.716681  161014 command_runner.go:130] > Certificate will not expire
	I1009 19:04:57.716965  161014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 19:04:57.752444  161014 command_runner.go:130] > Certificate will not expire
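	Each `openssl x509 -checkend 86400` call above asks whether the certificate is still valid 24 hours from now; "Certificate will not expire" means yes. The same check can be done natively with crypto/x509, as in the sketch below (the helper name is made up, and the path is one of the certs from the log).

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// certValidFor reports whether the PEM certificate at path is still valid for
	// at least the given duration, i.e. the native form of `-checkend`.
	func certValidFor(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if ok {
			fmt.Println("Certificate will not expire")
		} else {
			fmt.Println("Certificate will expire within 24h")
		}
	}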
	I1009 19:04:57.752733  161014 kubeadm.go:400] StartCluster: {Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:04:57.752827  161014 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:04:57.752877  161014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:04:57.781930  161014 cri.go:89] found id: ""
	I1009 19:04:57.782002  161014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:04:57.790396  161014 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1009 19:04:57.790421  161014 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1009 19:04:57.790427  161014 command_runner.go:130] > /var/lib/minikube/etcd:
	I1009 19:04:57.790446  161014 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:04:57.790453  161014 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:04:57.790499  161014 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:04:57.798150  161014 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:04:57.798252  161014 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-158523" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:57.798307  161014 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-137890/kubeconfig needs updating (will repair): [kubeconfig missing "functional-158523" cluster setting kubeconfig missing "functional-158523" context setting]
	I1009 19:04:57.798648  161014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
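	Here the kubeconfig at /home/jenkins/minikube-integration/21683-137890/kubeconfig is missing both the cluster and the context entry for functional-158523, so minikube repairs it under a write lock. A rough sketch of such a repair using client-go's clientcmd API is shown below; the function name is made up, the nil-map guards are defensive, and this is not minikube's own kubeconfig code.

	package main

	import (
		"k8s.io/client-go/tools/clientcmd"
		clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
	)

	// repairKubeconfig adds (or overwrites) the cluster, user and context entries
	// for a profile in an existing kubeconfig and makes it the current context.
	func repairKubeconfig(path, name, server, caFile, certFile, keyFile string) error {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return err
		}
		if cfg.Clusters == nil {
			cfg.Clusters = map[string]*clientcmdapi.Cluster{}
		}
		if cfg.AuthInfos == nil {
			cfg.AuthInfos = map[string]*clientcmdapi.AuthInfo{}
		}
		if cfg.Contexts == nil {
			cfg.Contexts = map[string]*clientcmdapi.Context{}
		}
		cfg.Clusters[name] = &clientcmdapi.Cluster{Server: server, CertificateAuthority: caFile}
		cfg.AuthInfos[name] = &clientcmdapi.AuthInfo{ClientCertificate: certFile, ClientKey: keyFile}
		cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
		cfg.CurrentContext = name
		return clientcmd.WriteToFile(*cfg, path)
	}

	func main() {
		if err := repairKubeconfig(
			"/home/jenkins/minikube-integration/21683-137890/kubeconfig",
			"functional-158523",
			"https://192.168.49.2:8441",
			"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt",
			"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt",
			"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key",
		); err != nil {
			panic(err)
		}
	}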
	I1009 19:04:57.799428  161014 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:57.799625  161014 kapi.go:59] client config for functional-158523: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:04:57.800169  161014 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:04:57.800185  161014 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:04:57.800191  161014 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:04:57.800195  161014 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:04:57.800199  161014 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:04:57.800257  161014 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:04:57.800663  161014 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:04:57.808677  161014 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:04:57.808712  161014 kubeadm.go:601] duration metric: took 18.25382ms to restartPrimaryControlPlane
	I1009 19:04:57.808720  161014 kubeadm.go:402] duration metric: took 56.001565ms to StartCluster
	I1009 19:04:57.808736  161014 settings.go:142] acquiring lock: {Name:mk34466033fb866bc8e167d2d953624ad0802283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:04:57.808837  161014 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:57.809418  161014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:04:57.809652  161014 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:04:57.809720  161014 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:04:57.809869  161014 addons.go:69] Setting storage-provisioner=true in profile "functional-158523"
	I1009 19:04:57.809882  161014 addons.go:69] Setting default-storageclass=true in profile "functional-158523"
	I1009 19:04:57.809890  161014 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:57.809907  161014 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-158523"
	I1009 19:04:57.809888  161014 addons.go:238] Setting addon storage-provisioner=true in "functional-158523"
	I1009 19:04:57.809999  161014 host.go:66] Checking if "functional-158523" exists ...
	I1009 19:04:57.810265  161014 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:04:57.810325  161014 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:04:57.815899  161014 out.go:179] * Verifying Kubernetes components...
	I1009 19:04:57.817259  161014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:04:57.830319  161014 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:04:57.830565  161014 kapi.go:59] client config for functional-158523: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
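	The &rest.Config dumped above is the client minikube builds straight from the profile's certificate files, pointed at https://192.168.49.2:8441. As a hedged sketch only (paths copied from the log, everything else illustrative), an equivalent client-go clientset could be built like this:

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Same fields as the rest.Config dumped in the log above.
		cfg := &rest.Config{
			Host: "https://192.168.49.2:8441",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key",
				CAFile:   "/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt",
			},
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// A cheap liveness probe against the apiserver; while the control plane is
		// down, as in the log below, this fails with "connection refused".
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			panic(err)
		}
		fmt.Println("apiserver version:", v.GitVersion)
	}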
	I1009 19:04:57.830893  161014 addons.go:238] Setting addon default-storageclass=true in "functional-158523"
	I1009 19:04:57.830936  161014 host.go:66] Checking if "functional-158523" exists ...
	I1009 19:04:57.831444  161014 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:04:57.831697  161014 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:04:57.833512  161014 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:57.833530  161014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:04:57.833580  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:57.856284  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:57.858504  161014 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:57.858545  161014 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:04:57.858618  161014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:04:57.879618  161014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:04:57.916522  161014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:04:57.930660  161014 node_ready.go:35] waiting up to 6m0s for node "functional-158523" to be "Ready" ...
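	node_ready.go now polls GET /api/v1/nodes/functional-158523 for up to 6 minutes until the node reports Ready; the connection-refused warnings further down are those polls failing while the apiserver is still coming back. A sketch of such a wait loop with client-go is below (the package and function names are assumed; the clientset would be built as in the previous sketch).

	package nodewait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// WaitNodeReady polls the named node until its Ready condition is True or the
	// timeout expires, tolerating transient errors such as "connection refused".
	func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // apiserver not reachable yet; keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}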
	I1009 19:04:57.930861  161014 type.go:168] "Request Body" body=""
	I1009 19:04:57.930944  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:57.931232  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:04:57.969596  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:57.988544  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:58.026986  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.027037  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.027061  161014 retry.go:31] will retry after 164.488016ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.047051  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.047098  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.047116  161014 retry.go:31] will retry after 194.483244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
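	Both addon applies fail because kubectl cannot reach the apiserver's OpenAPI endpoint on localhost:8441 (connection refused), so minikube re-runs each apply after a growing, apparently jittered delay, which is what the varying "will retry after …" intervals above and below reflect. The sketch below is a minimal illustration of that pattern, not minikube's actual retry.go.

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retryApply keeps re-running `kubectl apply --force -f <manifest>` with a
	// growing, lightly jittered delay until it succeeds or the deadline passes.
	func retryApply(manifest string, deadline time.Duration) error {
		delay := 150 * time.Millisecond
		end := time.Now().Add(deadline)
		for {
			out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
			if err == nil {
				return nil
			}
			if time.Now().After(end) {
				return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("apply failed, will retry after %v\n", sleep)
			time.Sleep(sleep)
			delay *= 2
		}
	}

	func main() {
		if err := retryApply("/etc/kubernetes/addons/storage-provisioner.yaml", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}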
	I1009 19:04:58.192480  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:58.242329  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:58.247629  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.247684  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.247711  161014 retry.go:31] will retry after 217.861079ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.297775  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.297841  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.297866  161014 retry.go:31] will retry after 198.924996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.431075  161014 type.go:168] "Request Body" body=""
	I1009 19:04:58.431155  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:58.431537  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:04:58.466794  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:58.497509  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:58.521187  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.524476  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.524506  161014 retry.go:31] will retry after 579.961825ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.549062  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:58.552103  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.552134  161014 retry.go:31] will retry after 574.521259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:58.930944  161014 type.go:168] "Request Body" body=""
	I1009 19:04:58.931032  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:58.931452  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:04:59.104703  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:59.127368  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:04:59.161080  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:59.161136  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.161156  161014 retry.go:31] will retry after 734.839127ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.184025  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:59.184076  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.184098  161014 retry.go:31] will retry after 1.025268007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.431572  161014 type.go:168] "Request Body" body=""
	I1009 19:04:59.431684  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:59.432074  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:04:59.896539  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:04:59.931433  161014 type.go:168] "Request Body" body=""
	I1009 19:04:59.931506  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:04:59.931837  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:04:59.931910  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:04:59.949186  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:59.952452  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:04:59.952481  161014 retry.go:31] will retry after 1.084602838s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:00.209882  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:00.262148  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:00.265292  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:00.265336  161014 retry.go:31] will retry after 1.287073207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:00.431721  161014 type.go:168] "Request Body" body=""
	I1009 19:05:00.431804  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:00.432145  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:00.931797  161014 type.go:168] "Request Body" body=""
	I1009 19:05:00.931880  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:00.932240  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:01.037525  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:01.094236  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:01.094283  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:01.094304  161014 retry.go:31] will retry after 1.546934371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:01.431777  161014 type.go:168] "Request Body" body=""
	I1009 19:05:01.431854  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:01.432251  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:01.553547  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:01.609996  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:01.610065  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:01.610089  161014 retry.go:31] will retry after 1.923829662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:01.931551  161014 type.go:168] "Request Body" body=""
	I1009 19:05:01.931629  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:01.931969  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:01.932040  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:02.431907  161014 type.go:168] "Request Body" body=""
	I1009 19:05:02.431987  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:02.432358  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:02.641614  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:02.696762  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:02.699844  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:02.699873  161014 retry.go:31] will retry after 2.36633365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:02.931226  161014 type.go:168] "Request Body" body=""
	I1009 19:05:02.931306  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:02.931737  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:03.431598  161014 type.go:168] "Request Body" body=""
	I1009 19:05:03.431682  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:03.432054  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:03.534329  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:03.590565  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:03.590611  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:03.590631  161014 retry.go:31] will retry after 1.952860092s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:03.931329  161014 type.go:168] "Request Body" body=""
	I1009 19:05:03.931427  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:03.931807  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:04.431531  161014 type.go:168] "Request Body" body=""
	I1009 19:05:04.431620  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:04.432004  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:04.432087  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:04.931914  161014 type.go:168] "Request Body" body=""
	I1009 19:05:04.931993  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:04.932341  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:05.066624  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:05.119719  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:05.123044  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:05.123086  161014 retry.go:31] will retry after 6.108852521s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:05.431602  161014 type.go:168] "Request Body" body=""
	I1009 19:05:05.431719  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:05.432158  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:05.544481  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:05.597312  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:05.600803  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:05.600837  161014 retry.go:31] will retry after 3.364758217s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:05.931296  161014 type.go:168] "Request Body" body=""
	I1009 19:05:05.931418  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:05.931808  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:06.431397  161014 type.go:168] "Request Body" body=""
	I1009 19:05:06.431479  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:06.431873  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:06.931533  161014 type.go:168] "Request Body" body=""
	I1009 19:05:06.931626  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:06.932024  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:06.932104  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:07.431687  161014 type.go:168] "Request Body" body=""
	I1009 19:05:07.431779  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:07.432140  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:07.930948  161014 type.go:168] "Request Body" body=""
	I1009 19:05:07.931031  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:07.931436  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:08.431020  161014 type.go:168] "Request Body" body=""
	I1009 19:05:08.431105  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:08.431489  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:08.931423  161014 type.go:168] "Request Body" body=""
	I1009 19:05:08.931528  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:08.931995  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:08.966195  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:09.019582  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:09.022605  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:09.022645  161014 retry.go:31] will retry after 7.771885559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:09.431187  161014 type.go:168] "Request Body" body=""
	I1009 19:05:09.431265  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:09.431662  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:09.431745  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:09.931552  161014 type.go:168] "Request Body" body=""
	I1009 19:05:09.931635  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:09.931979  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:10.431855  161014 type.go:168] "Request Body" body=""
	I1009 19:05:10.431945  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:10.432274  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:10.931110  161014 type.go:168] "Request Body" body=""
	I1009 19:05:10.931203  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:10.931572  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:11.233030  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:11.288902  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:11.288953  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:11.288975  161014 retry.go:31] will retry after 3.345246752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:11.431308  161014 type.go:168] "Request Body" body=""
	I1009 19:05:11.431402  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:11.431749  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:11.431819  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:11.931642  161014 type.go:168] "Request Body" body=""
	I1009 19:05:11.931749  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:11.932113  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:12.430947  161014 type.go:168] "Request Body" body=""
	I1009 19:05:12.431034  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:12.431445  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:12.931228  161014 type.go:168] "Request Body" body=""
	I1009 19:05:12.931315  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:12.931711  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:13.431639  161014 type.go:168] "Request Body" body=""
	I1009 19:05:13.431724  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:13.432088  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:13.432151  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:13.930962  161014 type.go:168] "Request Body" body=""
	I1009 19:05:13.931048  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:13.931440  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:14.431210  161014 type.go:168] "Request Body" body=""
	I1009 19:05:14.431296  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:14.431700  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:14.635101  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:14.689463  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:14.692943  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:14.692988  161014 retry.go:31] will retry after 8.426490786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:14.931454  161014 type.go:168] "Request Body" body=""
	I1009 19:05:14.931531  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:14.931912  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:15.431649  161014 type.go:168] "Request Body" body=""
	I1009 19:05:15.431767  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:15.432139  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:15.432244  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:15.931808  161014 type.go:168] "Request Body" body=""
	I1009 19:05:15.931885  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:15.932226  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:16.430935  161014 type.go:168] "Request Body" body=""
	I1009 19:05:16.431026  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:16.431417  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:16.794854  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:16.849041  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:16.852200  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:16.852234  161014 retry.go:31] will retry after 11.902123756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:16.931535  161014 type.go:168] "Request Body" body=""
	I1009 19:05:16.931634  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:16.931986  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:17.431870  161014 type.go:168] "Request Body" body=""
	I1009 19:05:17.431977  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:17.432410  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:17.432479  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:17.931194  161014 type.go:168] "Request Body" body=""
	I1009 19:05:17.931301  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:17.931659  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:18.431314  161014 type.go:168] "Request Body" body=""
	I1009 19:05:18.431420  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:18.431851  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:18.931802  161014 type.go:168] "Request Body" body=""
	I1009 19:05:18.931891  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:18.932247  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:19.431889  161014 type.go:168] "Request Body" body=""
	I1009 19:05:19.431978  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:19.432365  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:19.930982  161014 type.go:168] "Request Body" body=""
	I1009 19:05:19.931070  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:19.931472  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:19.931543  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:20.431080  161014 type.go:168] "Request Body" body=""
	I1009 19:05:20.431159  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:20.431505  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:20.931084  161014 type.go:168] "Request Body" body=""
	I1009 19:05:20.931162  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:20.931465  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:21.431126  161014 type.go:168] "Request Body" body=""
	I1009 19:05:21.431210  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:21.431583  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:21.931228  161014 type.go:168] "Request Body" body=""
	I1009 19:05:21.931310  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:21.931673  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:21.931757  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:22.431250  161014 type.go:168] "Request Body" body=""
	I1009 19:05:22.431335  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:22.431706  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:22.931281  161014 type.go:168] "Request Body" body=""
	I1009 19:05:22.931373  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:22.931764  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:23.120080  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:23.178288  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:23.178344  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:23.178369  161014 retry.go:31] will retry after 12.554942652s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:23.431791  161014 type.go:168] "Request Body" body=""
	I1009 19:05:23.431875  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:23.432227  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:23.931667  161014 type.go:168] "Request Body" body=""
	I1009 19:05:23.931758  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:23.932103  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:23.932167  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:24.431140  161014 type.go:168] "Request Body" body=""
	I1009 19:05:24.431223  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:24.431607  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:24.931219  161014 type.go:168] "Request Body" body=""
	I1009 19:05:24.931297  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:24.931656  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:25.431282  161014 type.go:168] "Request Body" body=""
	I1009 19:05:25.431369  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:25.431754  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:25.931371  161014 type.go:168] "Request Body" body=""
	I1009 19:05:25.931475  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:25.931852  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:26.431721  161014 type.go:168] "Request Body" body=""
	I1009 19:05:26.431805  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:26.432173  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:26.432243  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:26.931895  161014 type.go:168] "Request Body" body=""
	I1009 19:05:26.931982  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:26.932327  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:27.430978  161014 type.go:168] "Request Body" body=""
	I1009 19:05:27.431069  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:27.431440  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:27.931122  161014 type.go:168] "Request Body" body=""
	I1009 19:05:27.931203  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:27.931568  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:28.431156  161014 type.go:168] "Request Body" body=""
	I1009 19:05:28.431240  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:28.431629  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:28.755128  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:28.809181  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:28.812331  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:28.812369  161014 retry.go:31] will retry after 17.899546939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:28.931943  161014 type.go:168] "Request Body" body=""
	I1009 19:05:28.932042  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:28.932423  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:28.932495  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:29.431031  161014 type.go:168] "Request Body" body=""
	I1009 19:05:29.431117  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:29.431488  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:29.931112  161014 type.go:168] "Request Body" body=""
	I1009 19:05:29.931191  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:29.931549  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:30.431108  161014 type.go:168] "Request Body" body=""
	I1009 19:05:30.431184  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:30.431580  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:30.931234  161014 type.go:168] "Request Body" body=""
	I1009 19:05:30.931310  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:30.931701  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:31.431358  161014 type.go:168] "Request Body" body=""
	I1009 19:05:31.431464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:31.431883  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:31.431968  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:31.931551  161014 type.go:168] "Request Body" body=""
	I1009 19:05:31.931654  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:31.932150  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:32.431843  161014 type.go:168] "Request Body" body=""
	I1009 19:05:32.431928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:32.432325  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:32.930923  161014 type.go:168] "Request Body" body=""
	I1009 19:05:32.931009  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:32.931419  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:33.431049  161014 type.go:168] "Request Body" body=""
	I1009 19:05:33.431139  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:33.431539  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:33.931442  161014 type.go:168] "Request Body" body=""
	I1009 19:05:33.931529  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:33.931921  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:33.931994  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:34.431615  161014 type.go:168] "Request Body" body=""
	I1009 19:05:34.431709  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:34.432084  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:34.931772  161014 type.go:168] "Request Body" body=""
	I1009 19:05:34.931853  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:34.932239  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:35.431990  161014 type.go:168] "Request Body" body=""
	I1009 19:05:35.432083  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:35.432473  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:35.733912  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:05:35.787306  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:35.790843  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:35.790879  161014 retry.go:31] will retry after 31.721699669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:35.931334  161014 type.go:168] "Request Body" body=""
	I1009 19:05:35.931474  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:35.931860  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:36.431788  161014 type.go:168] "Request Body" body=""
	I1009 19:05:36.431878  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:36.432242  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:36.432309  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:36.931065  161014 type.go:168] "Request Body" body=""
	I1009 19:05:36.931156  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:36.931540  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:37.431314  161014 type.go:168] "Request Body" body=""
	I1009 19:05:37.431439  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:37.431797  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:37.931603  161014 type.go:168] "Request Body" body=""
	I1009 19:05:37.931697  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:37.932081  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:38.431682  161014 type.go:168] "Request Body" body=""
	I1009 19:05:38.431775  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:38.432127  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:38.930953  161014 type.go:168] "Request Body" body=""
	I1009 19:05:38.931049  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:38.931414  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:38.931498  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:39.430956  161014 type.go:168] "Request Body" body=""
	I1009 19:05:39.431070  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:39.431453  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:39.931034  161014 type.go:168] "Request Body" body=""
	I1009 19:05:39.931145  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:39.931490  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:40.431075  161014 type.go:168] "Request Body" body=""
	I1009 19:05:40.431166  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:40.431582  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:40.931249  161014 type.go:168] "Request Body" body=""
	I1009 19:05:40.931336  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:40.931693  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:40.931756  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:41.431331  161014 type.go:168] "Request Body" body=""
	I1009 19:05:41.431437  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:41.431805  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:41.931445  161014 type.go:168] "Request Body" body=""
	I1009 19:05:41.931535  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:41.931928  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:42.431625  161014 type.go:168] "Request Body" body=""
	I1009 19:05:42.431705  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:42.432064  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:42.931717  161014 type.go:168] "Request Body" body=""
	I1009 19:05:42.931803  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:42.932175  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:42.932247  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:43.430857  161014 type.go:168] "Request Body" body=""
	I1009 19:05:43.430971  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:43.431317  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:43.931149  161014 type.go:168] "Request Body" body=""
	I1009 19:05:43.931232  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:43.931588  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:44.431181  161014 type.go:168] "Request Body" body=""
	I1009 19:05:44.431266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:44.431639  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:44.931222  161014 type.go:168] "Request Body" body=""
	I1009 19:05:44.931306  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:44.931692  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:45.431277  161014 type.go:168] "Request Body" body=""
	I1009 19:05:45.431360  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:45.431736  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:45.431802  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:45.931357  161014 type.go:168] "Request Body" body=""
	I1009 19:05:45.931462  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:45.931838  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:46.431506  161014 type.go:168] "Request Body" body=""
	I1009 19:05:46.431593  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:46.431956  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:46.712449  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:05:46.768626  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:05:46.768679  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:46.768704  161014 retry.go:31] will retry after 25.41172348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:05:46.930938  161014 type.go:168] "Request Body" body=""
	I1009 19:05:46.931055  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:46.931460  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:47.431076  161014 type.go:168] "Request Body" body=""
	I1009 19:05:47.431153  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:47.431556  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:47.931415  161014 type.go:168] "Request Body" body=""
	I1009 19:05:47.931510  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:47.931879  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:47.931959  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:48.431674  161014 type.go:168] "Request Body" body=""
	I1009 19:05:48.431759  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:48.432094  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:48.930896  161014 type.go:168] "Request Body" body=""
	I1009 19:05:48.931001  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:48.931373  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:49.430996  161014 type.go:168] "Request Body" body=""
	I1009 19:05:49.431076  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:49.431465  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:49.931262  161014 type.go:168] "Request Body" body=""
	I1009 19:05:49.931370  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:49.931789  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:50.431699  161014 type.go:168] "Request Body" body=""
	I1009 19:05:50.431782  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:50.432126  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:50.432204  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:50.930957  161014 type.go:168] "Request Body" body=""
	I1009 19:05:50.931084  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:50.931482  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:51.431250  161014 type.go:168] "Request Body" body=""
	I1009 19:05:51.431347  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:51.431706  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:51.931601  161014 type.go:168] "Request Body" body=""
	I1009 19:05:51.931698  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:51.932063  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:52.430862  161014 type.go:168] "Request Body" body=""
	I1009 19:05:52.430953  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:52.431298  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:52.931085  161014 type.go:168] "Request Body" body=""
	I1009 19:05:52.931179  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:52.931557  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:52.931624  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:53.431339  161014 type.go:168] "Request Body" body=""
	I1009 19:05:53.431459  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:53.431829  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:53.931637  161014 type.go:168] "Request Body" body=""
	I1009 19:05:53.931736  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:53.932120  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:54.430920  161014 type.go:168] "Request Body" body=""
	I1009 19:05:54.431014  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:54.431426  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:54.931205  161014 type.go:168] "Request Body" body=""
	I1009 19:05:54.931299  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:54.931695  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:54.931776  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:55.431596  161014 type.go:168] "Request Body" body=""
	I1009 19:05:55.431674  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:55.432023  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:55.931858  161014 type.go:168] "Request Body" body=""
	I1009 19:05:55.931949  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:55.932317  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:56.431017  161014 type.go:168] "Request Body" body=""
	I1009 19:05:56.431102  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:56.431477  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:56.931242  161014 type.go:168] "Request Body" body=""
	I1009 19:05:56.931328  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:56.931740  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:56.931822  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:57.431701  161014 type.go:168] "Request Body" body=""
	I1009 19:05:57.431787  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:57.432169  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:57.931004  161014 type.go:168] "Request Body" body=""
	I1009 19:05:57.931088  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:57.931492  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:58.430896  161014 type.go:168] "Request Body" body=""
	I1009 19:05:58.430977  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:58.431316  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:58.931136  161014 type.go:168] "Request Body" body=""
	I1009 19:05:58.931305  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:58.931701  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:05:59.431527  161014 type.go:168] "Request Body" body=""
	I1009 19:05:59.431619  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:59.431986  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:05:59.432056  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:05:59.931914  161014 type.go:168] "Request Body" body=""
	I1009 19:05:59.932022  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:05:59.932451  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:00.431235  161014 type.go:168] "Request Body" body=""
	I1009 19:06:00.431321  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:00.431700  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:00.931491  161014 type.go:168] "Request Body" body=""
	I1009 19:06:00.931598  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:00.932038  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:01.430870  161014 type.go:168] "Request Body" body=""
	I1009 19:06:01.430962  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:01.431351  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:01.931160  161014 type.go:168] "Request Body" body=""
	I1009 19:06:01.931259  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:01.931701  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:01.931781  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:02.431642  161014 type.go:168] "Request Body" body=""
	I1009 19:06:02.431741  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:02.432105  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:02.930912  161014 type.go:168] "Request Body" body=""
	I1009 19:06:02.931026  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:02.931440  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:03.431233  161014 type.go:168] "Request Body" body=""
	I1009 19:06:03.431316  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:03.431698  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:03.931548  161014 type.go:168] "Request Body" body=""
	I1009 19:06:03.931627  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:03.932000  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:03.932085  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:04.431884  161014 type.go:168] "Request Body" body=""
	I1009 19:06:04.431963  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:04.432329  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:04.931167  161014 type.go:168] "Request Body" body=""
	I1009 19:06:04.931256  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:04.931675  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:05.431519  161014 type.go:168] "Request Body" body=""
	I1009 19:06:05.431593  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:05.431983  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:05.931927  161014 type.go:168] "Request Body" body=""
	I1009 19:06:05.932019  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:05.932421  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:05.932517  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:06.431278  161014 type.go:168] "Request Body" body=""
	I1009 19:06:06.431359  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:06.431798  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:06.931667  161014 type.go:168] "Request Body" body=""
	I1009 19:06:06.931753  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:06.932149  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:07.430942  161014 type.go:168] "Request Body" body=""
	I1009 19:06:07.431028  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:07.431419  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:07.513672  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:06:07.571073  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:07.571125  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:06:07.571145  161014 retry.go:31] will retry after 23.39838606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:06:07.931687  161014 type.go:168] "Request Body" body=""
	I1009 19:06:07.931784  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:07.932135  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:08.430924  161014 type.go:168] "Request Body" body=""
	I1009 19:06:08.431034  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:08.431403  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:08.431469  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:08.931208  161014 type.go:168] "Request Body" body=""
	I1009 19:06:08.931288  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:08.931643  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:09.431520  161014 type.go:168] "Request Body" body=""
	I1009 19:06:09.431629  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:09.432018  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:09.931868  161014 type.go:168] "Request Body" body=""
	I1009 19:06:09.931945  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:09.932304  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:10.431151  161014 type.go:168] "Request Body" body=""
	I1009 19:06:10.431248  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:10.431669  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:10.431738  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:10.931500  161014 type.go:168] "Request Body" body=""
	I1009 19:06:10.931584  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:10.931948  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:11.431952  161014 type.go:168] "Request Body" body=""
	I1009 19:06:11.432052  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:11.432455  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:11.931228  161014 type.go:168] "Request Body" body=""
	I1009 19:06:11.931310  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:11.931689  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:12.181131  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:06:12.238294  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:12.238358  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:06:12.238405  161014 retry.go:31] will retry after 21.481583015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:06:12.431681  161014 type.go:168] "Request Body" body=""
	I1009 19:06:12.431761  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:12.432057  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:12.432128  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:12.931845  161014 type.go:168] "Request Body" body=""
	I1009 19:06:12.931939  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:12.932415  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:13.431004  161014 type.go:168] "Request Body" body=""
	I1009 19:06:13.431117  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:13.431483  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:13.931308  161014 type.go:168] "Request Body" body=""
	I1009 19:06:13.931404  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:13.931807  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:14.431415  161014 type.go:168] "Request Body" body=""
	I1009 19:06:14.431502  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:14.431906  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:14.931635  161014 type.go:168] "Request Body" body=""
	I1009 19:06:14.931725  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:14.932138  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:14.932205  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:15.431840  161014 type.go:168] "Request Body" body=""
	I1009 19:06:15.431927  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:15.432292  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:15.930896  161014 type.go:168] "Request Body" body=""
	I1009 19:06:15.930996  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:15.931404  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:16.431000  161014 type.go:168] "Request Body" body=""
	I1009 19:06:16.431088  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:16.431482  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:16.931089  161014 type.go:168] "Request Body" body=""
	I1009 19:06:16.931169  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:16.931606  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:17.431187  161014 type.go:168] "Request Body" body=""
	I1009 19:06:17.431266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:17.431645  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:17.431717  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:17.931505  161014 type.go:168] "Request Body" body=""
	I1009 19:06:17.931588  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:17.931977  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:18.431663  161014 type.go:168] "Request Body" body=""
	I1009 19:06:18.431753  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:18.432133  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:18.931039  161014 type.go:168] "Request Body" body=""
	I1009 19:06:18.931125  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:18.931496  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:19.431026  161014 type.go:168] "Request Body" body=""
	I1009 19:06:19.431101  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:19.431425  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:19.931079  161014 type.go:168] "Request Body" body=""
	I1009 19:06:19.931160  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:19.931533  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:19.931605  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:20.431142  161014 type.go:168] "Request Body" body=""
	I1009 19:06:20.431225  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:20.431606  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:20.931205  161014 type.go:168] "Request Body" body=""
	I1009 19:06:20.931288  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:20.931685  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:21.431270  161014 type.go:168] "Request Body" body=""
	I1009 19:06:21.431352  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:21.431727  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:21.931351  161014 type.go:168] "Request Body" body=""
	I1009 19:06:21.931459  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:21.931867  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:21.931960  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:22.431630  161014 type.go:168] "Request Body" body=""
	I1009 19:06:22.431720  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:22.432112  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:22.931909  161014 type.go:168] "Request Body" body=""
	I1009 19:06:22.932006  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:22.932466  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:23.431019  161014 type.go:168] "Request Body" body=""
	I1009 19:06:23.431108  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:23.431500  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:23.931352  161014 type.go:168] "Request Body" body=""
	I1009 19:06:23.931464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:23.931866  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:24.430863  161014 type.go:168] "Request Body" body=""
	I1009 19:06:24.430951  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:24.431355  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:24.431478  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:24.930971  161014 type.go:168] "Request Body" body=""
	I1009 19:06:24.931061  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:24.931476  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:25.431052  161014 type.go:168] "Request Body" body=""
	I1009 19:06:25.431143  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:25.431497  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:25.931072  161014 type.go:168] "Request Body" body=""
	I1009 19:06:25.931164  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:25.931594  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:26.430916  161014 type.go:168] "Request Body" body=""
	I1009 19:06:26.431010  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:26.431407  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:26.931057  161014 type.go:168] "Request Body" body=""
	I1009 19:06:26.931135  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:26.931533  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:26.931610  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:27.431142  161014 type.go:168] "Request Body" body=""
	I1009 19:06:27.431220  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:27.431622  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:27.931665  161014 type.go:168] "Request Body" body=""
	I1009 19:06:27.931758  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:27.932163  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:28.431861  161014 type.go:168] "Request Body" body=""
	I1009 19:06:28.431949  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:28.432310  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:28.931285  161014 type.go:168] "Request Body" body=""
	I1009 19:06:28.931362  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:28.931821  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:28.931892  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:29.431462  161014 type.go:168] "Request Body" body=""
	I1009 19:06:29.431547  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:29.432004  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:29.931701  161014 type.go:168] "Request Body" body=""
	I1009 19:06:29.931782  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:29.932180  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:30.431935  161014 type.go:168] "Request Body" body=""
	I1009 19:06:30.432026  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:30.432405  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:30.931026  161014 type.go:168] "Request Body" body=""
	I1009 19:06:30.931109  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:30.931522  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:30.970755  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:06:31.028107  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:31.028174  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:31.028309  161014 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:06:31.431764  161014 type.go:168] "Request Body" body=""
	I1009 19:06:31.431853  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:31.432208  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:31.432284  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:31.930867  161014 type.go:168] "Request Body" body=""
	I1009 19:06:31.930984  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:31.931336  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:32.430958  161014 type.go:168] "Request Body" body=""
	I1009 19:06:32.431047  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:32.431465  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:32.931031  161014 type.go:168] "Request Body" body=""
	I1009 19:06:32.931127  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:32.931496  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:33.431116  161014 type.go:168] "Request Body" body=""
	I1009 19:06:33.431195  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:33.431601  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:33.721082  161014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:06:33.781514  161014 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:33.781597  161014 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:06:33.781723  161014 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:06:33.784570  161014 out.go:179] * Enabled addons: 
	I1009 19:06:33.786444  161014 addons.go:514] duration metric: took 1m35.976729521s for enable addons: enabled=[]
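[Note] The enable-addons phase ends with an empty addon list because every apply and every node poll above hit "connection refused" on port 8441, i.e. the kube-apiserver was not listening at either endpoint. A plain TCP dial is enough to confirm that; the sketch below is illustrative and only reuses the addresses seen in the log.

package main

import (
	"fmt"
	"net"
	"time"
)

// probe reports whether anything is accepting TCP connections on addr.
// "connection refused" here matches the failures seen throughout the log.
func probe(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	for _, addr := range []string{"127.0.0.1:8441", "192.168.49.2:8441"} {
		if err := probe(addr); err != nil {
			fmt.Printf("%s: unreachable (%v)\n", addr, err)
			continue
		}
		fmt.Printf("%s: apiserver port is open\n", addr)
	}
}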
	I1009 19:06:33.931193  161014 type.go:168] "Request Body" body=""
	I1009 19:06:33.931298  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:33.931708  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:33.931785  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:34.431608  161014 type.go:168] "Request Body" body=""
	I1009 19:06:34.431688  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:34.432050  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:34.931894  161014 type.go:168] "Request Body" body=""
	I1009 19:06:34.931982  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:34.932369  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:35.431177  161014 type.go:168] "Request Body" body=""
	I1009 19:06:35.431261  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:35.431656  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:35.931508  161014 type.go:168] "Request Body" body=""
	I1009 19:06:35.931632  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:35.932017  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:35.932080  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:36.431933  161014 type.go:168] "Request Body" body=""
	I1009 19:06:36.432042  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:36.432446  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:36.931225  161014 type.go:168] "Request Body" body=""
	I1009 19:06:36.931328  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:36.931704  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:37.431625  161014 type.go:168] "Request Body" body=""
	I1009 19:06:37.431738  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:37.432141  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:37.930857  161014 type.go:168] "Request Body" body=""
	I1009 19:06:37.930995  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:37.931342  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:38.431133  161014 type.go:168] "Request Body" body=""
	I1009 19:06:38.431214  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:38.431597  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:38.431683  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:38.931462  161014 type.go:168] "Request Body" body=""
	I1009 19:06:38.931563  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:38.931971  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:39.431871  161014 type.go:168] "Request Body" body=""
	I1009 19:06:39.431963  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:39.432315  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:39.931128  161014 type.go:168] "Request Body" body=""
	I1009 19:06:39.931220  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:39.931618  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:40.431437  161014 type.go:168] "Request Body" body=""
	I1009 19:06:40.431514  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:40.431890  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:40.431961  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:40.931810  161014 type.go:168] "Request Body" body=""
	I1009 19:06:40.931912  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:40.932290  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:41.431100  161014 type.go:168] "Request Body" body=""
	I1009 19:06:41.431218  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:41.431599  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:41.931346  161014 type.go:168] "Request Body" body=""
	I1009 19:06:41.931468  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:41.931837  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:42.431757  161014 type.go:168] "Request Body" body=""
	I1009 19:06:42.431845  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:42.432237  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:42.432298  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:42.931019  161014 type.go:168] "Request Body" body=""
	I1009 19:06:42.931113  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:42.931521  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:43.431303  161014 type.go:168] "Request Body" body=""
	I1009 19:06:43.431415  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:43.431782  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:43.931780  161014 type.go:168] "Request Body" body=""
	I1009 19:06:43.931864  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:43.932272  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:44.431107  161014 type.go:168] "Request Body" body=""
	I1009 19:06:44.431212  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:44.431609  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:44.931522  161014 type.go:168] "Request Body" body=""
	I1009 19:06:44.931612  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:44.932005  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:44.932091  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:45.430863  161014 type.go:168] "Request Body" body=""
	I1009 19:06:45.430955  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:45.431333  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:45.931195  161014 type.go:168] "Request Body" body=""
	I1009 19:06:45.931296  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:45.931727  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:46.431598  161014 type.go:168] "Request Body" body=""
	I1009 19:06:46.431682  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:46.432089  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:46.930909  161014 type.go:168] "Request Body" body=""
	I1009 19:06:46.931014  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:46.931410  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:47.431166  161014 type.go:168] "Request Body" body=""
	I1009 19:06:47.431244  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:47.431610  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:47.431679  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:47.931409  161014 type.go:168] "Request Body" body=""
	I1009 19:06:47.931495  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:47.931852  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:48.431707  161014 type.go:168] "Request Body" body=""
	I1009 19:06:48.431803  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:48.432224  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:48.931113  161014 type.go:168] "Request Body" body=""
	I1009 19:06:48.931196  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:48.931590  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:49.431438  161014 type.go:168] "Request Body" body=""
	I1009 19:06:49.431532  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:49.431933  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:49.432014  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:49.931847  161014 type.go:168] "Request Body" body=""
	I1009 19:06:49.931955  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:49.932365  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:50.431151  161014 type.go:168] "Request Body" body=""
	I1009 19:06:50.431252  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:50.431731  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:50.931584  161014 type.go:168] "Request Body" body=""
	I1009 19:06:50.931668  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:50.932034  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:51.431892  161014 type.go:168] "Request Body" body=""
	I1009 19:06:51.432001  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:51.432357  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:51.432451  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:51.931169  161014 type.go:168] "Request Body" body=""
	I1009 19:06:51.931251  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:51.931649  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:52.431585  161014 type.go:168] "Request Body" body=""
	I1009 19:06:52.431683  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:52.432058  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:52.931903  161014 type.go:168] "Request Body" body=""
	I1009 19:06:52.931994  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:52.932365  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:53.431140  161014 type.go:168] "Request Body" body=""
	I1009 19:06:53.431240  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:53.431607  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:53.931515  161014 type.go:168] "Request Body" body=""
	I1009 19:06:53.931602  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:53.931970  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:53.932045  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:54.431874  161014 type.go:168] "Request Body" body=""
	I1009 19:06:54.431956  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:54.432333  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:54.931110  161014 type.go:168] "Request Body" body=""
	I1009 19:06:54.931191  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:54.931572  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:55.431313  161014 type.go:168] "Request Body" body=""
	I1009 19:06:55.431422  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:55.431759  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:55.931628  161014 type.go:168] "Request Body" body=""
	I1009 19:06:55.931708  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:55.932052  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:55.932122  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:56.430861  161014 type.go:168] "Request Body" body=""
	I1009 19:06:56.430953  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:56.431299  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:56.931073  161014 type.go:168] "Request Body" body=""
	I1009 19:06:56.931162  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:56.931537  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:57.431318  161014 type.go:168] "Request Body" body=""
	I1009 19:06:57.431417  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:57.431759  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:57.931618  161014 type.go:168] "Request Body" body=""
	I1009 19:06:57.931839  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:57.932218  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:06:57.932279  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:06:58.431144  161014 type.go:168] "Request Body" body=""
	I1009 19:06:58.431334  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:58.431970  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:58.931861  161014 type.go:168] "Request Body" body=""
	I1009 19:06:58.931940  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:58.932311  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:59.431143  161014 type.go:168] "Request Body" body=""
	I1009 19:06:59.431223  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:59.431592  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:06:59.930909  161014 type.go:168] "Request Body" body=""
	I1009 19:06:59.931020  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:06:59.931371  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:00.430999  161014 type.go:168] "Request Body" body=""
	I1009 19:07:00.431081  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:00.431500  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:00.431566  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:00.931093  161014 type.go:168] "Request Body" body=""
	I1009 19:07:00.931180  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:00.931581  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:01.431360  161014 type.go:168] "Request Body" body=""
	I1009 19:07:01.431464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:01.431832  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:01.931692  161014 type.go:168] "Request Body" body=""
	I1009 19:07:01.931784  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:01.932184  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:02.430934  161014 type.go:168] "Request Body" body=""
	I1009 19:07:02.431018  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:02.431378  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:02.931191  161014 type.go:168] "Request Body" body=""
	I1009 19:07:02.931275  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:02.931689  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:02.931756  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:03.431523  161014 type.go:168] "Request Body" body=""
	I1009 19:07:03.431604  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:03.431991  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:03.930871  161014 type.go:168] "Request Body" body=""
	I1009 19:07:03.930969  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:03.931407  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:04.431200  161014 type.go:168] "Request Body" body=""
	I1009 19:07:04.431281  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:04.431686  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:04.931603  161014 type.go:168] "Request Body" body=""
	I1009 19:07:04.931684  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:04.932085  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:04.932154  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:05.430888  161014 type.go:168] "Request Body" body=""
	I1009 19:07:05.430980  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:05.431365  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:05.931176  161014 type.go:168] "Request Body" body=""
	I1009 19:07:05.931266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:05.931718  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:06.431607  161014 type.go:168] "Request Body" body=""
	I1009 19:07:06.431688  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:06.432075  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:06.930900  161014 type.go:168] "Request Body" body=""
	I1009 19:07:06.931004  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:06.931420  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:07.431211  161014 type.go:168] "Request Body" body=""
	I1009 19:07:07.431297  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:07.431674  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:07.431738  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:07.931521  161014 type.go:168] "Request Body" body=""
	I1009 19:07:07.931600  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:07.931988  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:08.431938  161014 type.go:168] "Request Body" body=""
	I1009 19:07:08.432023  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:08.432368  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:08.931198  161014 type.go:168] "Request Body" body=""
	I1009 19:07:08.931276  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:08.931670  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:09.431634  161014 type.go:168] "Request Body" body=""
	I1009 19:07:09.431764  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:09.432193  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:09.432271  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:09.931021  161014 type.go:168] "Request Body" body=""
	I1009 19:07:09.931112  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:09.931511  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:10.431319  161014 type.go:168] "Request Body" body=""
	I1009 19:07:10.431421  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:10.431776  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:10.931586  161014 type.go:168] "Request Body" body=""
	I1009 19:07:10.931675  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:10.932028  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:11.431928  161014 type.go:168] "Request Body" body=""
	I1009 19:07:11.432018  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:11.432409  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:11.432493  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:11.931228  161014 type.go:168] "Request Body" body=""
	I1009 19:07:11.931314  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:11.931691  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:12.431493  161014 type.go:168] "Request Body" body=""
	I1009 19:07:12.431576  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:12.431970  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:12.931830  161014 type.go:168] "Request Body" body=""
	I1009 19:07:12.931910  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:12.932268  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:13.431040  161014 type.go:168] "Request Body" body=""
	I1009 19:07:13.431128  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:13.431515  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:13.931313  161014 type.go:168] "Request Body" body=""
	I1009 19:07:13.931411  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:13.931829  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:13.931895  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:14.431732  161014 type.go:168] "Request Body" body=""
	I1009 19:07:14.431830  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:14.432198  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:14.931016  161014 type.go:168] "Request Body" body=""
	I1009 19:07:14.931107  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:14.931472  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:15.431233  161014 type.go:168] "Request Body" body=""
	I1009 19:07:15.431326  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:15.431754  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:15.931605  161014 type.go:168] "Request Body" body=""
	I1009 19:07:15.931691  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:15.932042  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:15.932112  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:16.430847  161014 type.go:168] "Request Body" body=""
	I1009 19:07:16.430926  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:16.431288  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:16.931059  161014 type.go:168] "Request Body" body=""
	I1009 19:07:16.931135  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:16.931483  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:17.431236  161014 type.go:168] "Request Body" body=""
	I1009 19:07:17.431328  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:17.431725  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:17.931584  161014 type.go:168] "Request Body" body=""
	I1009 19:07:17.931680  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:17.932068  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:17.932144  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:18.430878  161014 type.go:168] "Request Body" body=""
	I1009 19:07:18.430959  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:18.431336  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:18.931220  161014 type.go:168] "Request Body" body=""
	I1009 19:07:18.931308  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:18.931716  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:19.431622  161014 type.go:168] "Request Body" body=""
	I1009 19:07:19.431711  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:19.432084  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:19.930887  161014 type.go:168] "Request Body" body=""
	I1009 19:07:19.930970  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:19.931335  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:20.431128  161014 type.go:168] "Request Body" body=""
	I1009 19:07:20.431228  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:20.431607  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:20.431677  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:20.931571  161014 type.go:168] "Request Body" body=""
	I1009 19:07:20.931652  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:20.932025  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:21.431914  161014 type.go:168] "Request Body" body=""
	I1009 19:07:21.432004  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:21.432437  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:21.931260  161014 type.go:168] "Request Body" body=""
	I1009 19:07:21.931349  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:21.931776  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:22.431637  161014 type.go:168] "Request Body" body=""
	I1009 19:07:22.431729  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:22.432091  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:22.432158  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:22.930926  161014 type.go:168] "Request Body" body=""
	I1009 19:07:22.931021  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:22.931412  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:23.431182  161014 type.go:168] "Request Body" body=""
	I1009 19:07:23.431266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:23.431631  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:23.931458  161014 type.go:168] "Request Body" body=""
	I1009 19:07:23.931550  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:23.931920  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:24.431853  161014 type.go:168] "Request Body" body=""
	I1009 19:07:24.431948  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:24.432326  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:24.432422  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:24.931143  161014 type.go:168] "Request Body" body=""
	I1009 19:07:24.931223  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:24.931599  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:25.431358  161014 type.go:168] "Request Body" body=""
	I1009 19:07:25.431464  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:25.431821  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:25.931703  161014 type.go:168] "Request Body" body=""
	I1009 19:07:25.931787  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:25.932180  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:26.430976  161014 type.go:168] "Request Body" body=""
	I1009 19:07:26.431075  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:26.431458  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:26.931245  161014 type.go:168] "Request Body" body=""
	I1009 19:07:26.931331  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:26.931713  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:26.931784  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:27.431576  161014 type.go:168] "Request Body" body=""
	I1009 19:07:27.431668  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:27.432031  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:27.931772  161014 type.go:168] "Request Body" body=""
	I1009 19:07:27.931862  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:27.932254  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:28.431022  161014 type.go:168] "Request Body" body=""
	I1009 19:07:28.431102  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:28.431482  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:28.931348  161014 type.go:168] "Request Body" body=""
	I1009 19:07:28.931459  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:28.931844  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:28.931911  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:29.431781  161014 type.go:168] "Request Body" body=""
	I1009 19:07:29.431865  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:29.432226  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:29.931019  161014 type.go:168] "Request Body" body=""
	I1009 19:07:29.931099  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:29.931495  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:30.431235  161014 type.go:168] "Request Body" body=""
	I1009 19:07:30.431318  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:30.431699  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:30.931618  161014 type.go:168] "Request Body" body=""
	I1009 19:07:30.931726  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:30.932096  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:30.932155  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:31.430950  161014 type.go:168] "Request Body" body=""
	I1009 19:07:31.431039  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:31.431429  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:31.931226  161014 type.go:168] "Request Body" body=""
	I1009 19:07:31.931324  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:31.931743  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:32.431688  161014 type.go:168] "Request Body" body=""
	I1009 19:07:32.431781  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:32.432184  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:32.930987  161014 type.go:168] "Request Body" body=""
	I1009 19:07:32.931070  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:32.931473  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:33.431239  161014 type.go:168] "Request Body" body=""
	I1009 19:07:33.431321  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:33.431722  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:33.431792  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:33.931516  161014 type.go:168] "Request Body" body=""
	I1009 19:07:33.931606  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:33.931963  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:34.431848  161014 type.go:168] "Request Body" body=""
	I1009 19:07:34.431929  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:34.432337  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:34.931149  161014 type.go:168] "Request Body" body=""
	I1009 19:07:34.931233  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:34.931610  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:35.431433  161014 type.go:168] "Request Body" body=""
	I1009 19:07:35.431519  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:35.431884  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:35.431951  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:35.931757  161014 type.go:168] "Request Body" body=""
	I1009 19:07:35.931834  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:35.932194  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:36.431002  161014 type.go:168] "Request Body" body=""
	I1009 19:07:36.431092  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:36.431521  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:36.931304  161014 type.go:168] "Request Body" body=""
	I1009 19:07:36.931415  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:36.931771  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:37.431635  161014 type.go:168] "Request Body" body=""
	I1009 19:07:37.431735  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:37.432135  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:37.432203  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:37.931637  161014 type.go:168] "Request Body" body=""
	I1009 19:07:37.931755  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:37.932124  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:38.430922  161014 type.go:168] "Request Body" body=""
	I1009 19:07:38.431020  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:38.431405  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:38.931217  161014 type.go:168] "Request Body" body=""
	I1009 19:07:38.931295  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:38.931651  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:39.431495  161014 type.go:168] "Request Body" body=""
	I1009 19:07:39.431575  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:39.431936  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:39.931858  161014 type.go:168] "Request Body" body=""
	I1009 19:07:39.931940  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:39.932326  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:39.932421  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:40.431161  161014 type.go:168] "Request Body" body=""
	I1009 19:07:40.431255  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:40.431615  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:40.931366  161014 type.go:168] "Request Body" body=""
	I1009 19:07:40.931491  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:40.931869  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:41.431767  161014 type.go:168] "Request Body" body=""
	I1009 19:07:41.431861  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:41.432350  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:41.931160  161014 type.go:168] "Request Body" body=""
	I1009 19:07:41.931250  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:41.931735  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:42.431633  161014 type.go:168] "Request Body" body=""
	I1009 19:07:42.431732  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:42.432111  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:42.432176  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:42.930929  161014 type.go:168] "Request Body" body=""
	I1009 19:07:42.931031  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:42.931442  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:43.431234  161014 type.go:168] "Request Body" body=""
	I1009 19:07:43.431336  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:43.431722  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:43.931601  161014 type.go:168] "Request Body" body=""
	I1009 19:07:43.931683  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:43.932053  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:44.430865  161014 type.go:168] "Request Body" body=""
	I1009 19:07:44.430947  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:44.431356  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:44.931167  161014 type.go:168] "Request Body" body=""
	I1009 19:07:44.931250  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:44.931627  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:44.931696  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:45.431431  161014 type.go:168] "Request Body" body=""
	I1009 19:07:45.431510  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:45.431847  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:45.931770  161014 type.go:168] "Request Body" body=""
	I1009 19:07:45.931853  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:45.932210  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:46.430939  161014 type.go:168] "Request Body" body=""
	I1009 19:07:46.431018  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:46.431347  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:46.931133  161014 type.go:168] "Request Body" body=""
	I1009 19:07:46.931213  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:46.931599  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:47.431337  161014 type.go:168] "Request Body" body=""
	I1009 19:07:47.431449  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:47.431806  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:47.431876  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:47.931598  161014 type.go:168] "Request Body" body=""
	I1009 19:07:47.931682  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:47.932028  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:48.431835  161014 type.go:168] "Request Body" body=""
	I1009 19:07:48.431919  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:48.432273  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:48.931089  161014 type.go:168] "Request Body" body=""
	I1009 19:07:48.931179  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:48.931527  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:49.431272  161014 type.go:168] "Request Body" body=""
	I1009 19:07:49.431350  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:49.431715  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:49.931579  161014 type.go:168] "Request Body" body=""
	I1009 19:07:49.931664  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:49.932040  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:49.932107  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:50.431582  161014 type.go:168] "Request Body" body=""
	I1009 19:07:50.431662  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:50.432003  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:50.931872  161014 type.go:168] "Request Body" body=""
	I1009 19:07:50.931951  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:50.932322  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:51.431016  161014 type.go:168] "Request Body" body=""
	I1009 19:07:51.431095  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:51.431478  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:51.931270  161014 type.go:168] "Request Body" body=""
	I1009 19:07:51.931349  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:51.931734  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:52.431662  161014 type.go:168] "Request Body" body=""
	I1009 19:07:52.431743  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:52.432165  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:52.432255  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:52.931027  161014 type.go:168] "Request Body" body=""
	I1009 19:07:52.931111  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:52.931524  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:53.431299  161014 type.go:168] "Request Body" body=""
	I1009 19:07:53.431409  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:53.431777  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:53.931692  161014 type.go:168] "Request Body" body=""
	I1009 19:07:53.931802  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:53.932188  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:54.431033  161014 type.go:168] "Request Body" body=""
	I1009 19:07:54.431116  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:54.431506  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:54.931262  161014 type.go:168] "Request Body" body=""
	I1009 19:07:54.931371  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:54.931818  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:54.931896  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:55.431748  161014 type.go:168] "Request Body" body=""
	I1009 19:07:55.431839  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:55.432227  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:55.931001  161014 type.go:168] "Request Body" body=""
	I1009 19:07:55.931091  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:55.931464  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:56.431257  161014 type.go:168] "Request Body" body=""
	I1009 19:07:56.431342  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:56.431727  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:56.931602  161014 type.go:168] "Request Body" body=""
	I1009 19:07:56.931701  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:56.932081  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:56.932152  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:57.430910  161014 type.go:168] "Request Body" body=""
	I1009 19:07:57.431002  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:57.431362  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:57.931308  161014 type.go:168] "Request Body" body=""
	I1009 19:07:57.931413  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:57.931773  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:58.431643  161014 type.go:168] "Request Body" body=""
	I1009 19:07:58.431802  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:58.432134  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:58.931081  161014 type.go:168] "Request Body" body=""
	I1009 19:07:58.931169  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:58.931540  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:07:59.431310  161014 type.go:168] "Request Body" body=""
	I1009 19:07:59.431416  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:59.431835  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:07:59.431910  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:07:59.931738  161014 type.go:168] "Request Body" body=""
	I1009 19:07:59.931826  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:07:59.932198  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:00.430977  161014 type.go:168] "Request Body" body=""
	I1009 19:08:00.431073  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:00.431459  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:00.931240  161014 type.go:168] "Request Body" body=""
	I1009 19:08:00.931327  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:00.931726  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:01.431608  161014 type.go:168] "Request Body" body=""
	I1009 19:08:01.431703  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:01.432081  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:01.432155  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:01.930901  161014 type.go:168] "Request Body" body=""
	I1009 19:08:01.930998  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:01.931353  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:02.431155  161014 type.go:168] "Request Body" body=""
	I1009 19:08:02.431246  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:02.431683  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:02.931507  161014 type.go:168] "Request Body" body=""
	I1009 19:08:02.931648  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:02.932004  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:03.431604  161014 type.go:168] "Request Body" body=""
	I1009 19:08:03.431682  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:03.432043  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:03.930851  161014 type.go:168] "Request Body" body=""
	I1009 19:08:03.930932  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:03.931328  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:03.931434  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:04.431148  161014 type.go:168] "Request Body" body=""
	I1009 19:08:04.431226  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:04.431671  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:04.931497  161014 type.go:168] "Request Body" body=""
	I1009 19:08:04.931576  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:04.931933  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:05.431818  161014 type.go:168] "Request Body" body=""
	I1009 19:08:05.431913  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:05.432337  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:05.931111  161014 type.go:168] "Request Body" body=""
	I1009 19:08:05.931188  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:05.931598  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:05.931665  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:06.431433  161014 type.go:168] "Request Body" body=""
	I1009 19:08:06.431518  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:06.431897  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:06.931739  161014 type.go:168] "Request Body" body=""
	I1009 19:08:06.931825  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:06.932190  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:07.431010  161014 type.go:168] "Request Body" body=""
	I1009 19:08:07.431098  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:07.431492  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:07.931321  161014 type.go:168] "Request Body" body=""
	I1009 19:08:07.931478  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:07.931847  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:07.931911  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:08.431736  161014 type.go:168] "Request Body" body=""
	I1009 19:08:08.431826  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:08.432199  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:08.931147  161014 type.go:168] "Request Body" body=""
	I1009 19:08:08.931256  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:08.931581  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:09.431348  161014 type.go:168] "Request Body" body=""
	I1009 19:08:09.431501  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:09.431847  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:09.931761  161014 type.go:168] "Request Body" body=""
	I1009 19:08:09.931868  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:09.932264  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:09.932358  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:10.431111  161014 type.go:168] "Request Body" body=""
	I1009 19:08:10.431226  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:10.431600  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:10.931402  161014 type.go:168] "Request Body" body=""
	I1009 19:08:10.931502  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:10.931871  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:11.431784  161014 type.go:168] "Request Body" body=""
	I1009 19:08:11.431872  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:11.432233  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:11.931048  161014 type.go:168] "Request Body" body=""
	I1009 19:08:11.931144  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:11.931576  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:12.431421  161014 type.go:168] "Request Body" body=""
	I1009 19:08:12.431503  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:12.431862  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:12.431928  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:12.931757  161014 type.go:168] "Request Body" body=""
	I1009 19:08:12.931854  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:12.932305  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:13.431097  161014 type.go:168] "Request Body" body=""
	I1009 19:08:13.431185  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:13.431628  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:13.931448  161014 type.go:168] "Request Body" body=""
	I1009 19:08:13.931544  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:13.931895  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:14.431813  161014 type.go:168] "Request Body" body=""
	I1009 19:08:14.431896  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:14.432325  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:14.432452  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:14.931193  161014 type.go:168] "Request Body" body=""
	I1009 19:08:14.931304  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:14.931724  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:15.431610  161014 type.go:168] "Request Body" body=""
	I1009 19:08:15.431784  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:15.432189  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:15.930996  161014 type.go:168] "Request Body" body=""
	I1009 19:08:15.931076  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:15.931476  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:16.431279  161014 type.go:168] "Request Body" body=""
	I1009 19:08:16.431364  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:16.431823  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:16.931708  161014 type.go:168] "Request Body" body=""
	I1009 19:08:16.931791  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:16.932165  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:16.932241  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:17.430990  161014 type.go:168] "Request Body" body=""
	I1009 19:08:17.431074  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:17.431506  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:17.931431  161014 type.go:168] "Request Body" body=""
	I1009 19:08:17.931525  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:17.931892  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:18.431806  161014 type.go:168] "Request Body" body=""
	I1009 19:08:18.431897  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:18.432299  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:18.931120  161014 type.go:168] "Request Body" body=""
	I1009 19:08:18.931214  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:18.931614  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:19.431514  161014 type.go:168] "Request Body" body=""
	I1009 19:08:19.431606  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:19.432047  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:19.432124  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:19.931598  161014 type.go:168] "Request Body" body=""
	I1009 19:08:19.931691  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:19.932042  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:20.431891  161014 type.go:168] "Request Body" body=""
	I1009 19:08:20.431971  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:20.432405  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:20.931180  161014 type.go:168] "Request Body" body=""
	I1009 19:08:20.931263  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:20.931621  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:21.431543  161014 type.go:168] "Request Body" body=""
	I1009 19:08:21.431622  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:21.432021  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:21.931880  161014 type.go:168] "Request Body" body=""
	I1009 19:08:21.931973  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:21.932344  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:21.932455  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:22.431220  161014 type.go:168] "Request Body" body=""
	I1009 19:08:22.431312  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:22.431735  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:22.931611  161014 type.go:168] "Request Body" body=""
	I1009 19:08:22.931692  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:22.932047  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:23.430844  161014 type.go:168] "Request Body" body=""
	I1009 19:08:23.430928  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:23.431339  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:23.931177  161014 type.go:168] "Request Body" body=""
	I1009 19:08:23.931280  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:23.931703  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:24.431533  161014 type.go:168] "Request Body" body=""
	I1009 19:08:24.431623  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:24.432029  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:24.432099  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:24.930846  161014 type.go:168] "Request Body" body=""
	I1009 19:08:24.930940  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:24.931301  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:25.431093  161014 type.go:168] "Request Body" body=""
	I1009 19:08:25.431180  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:25.431586  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:25.931364  161014 type.go:168] "Request Body" body=""
	I1009 19:08:25.931490  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:25.931848  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:26.431757  161014 type.go:168] "Request Body" body=""
	I1009 19:08:26.431844  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:26.432286  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:26.432356  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:26.931111  161014 type.go:168] "Request Body" body=""
	I1009 19:08:26.931219  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:26.931654  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:27.431562  161014 type.go:168] "Request Body" body=""
	I1009 19:08:27.431657  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:27.432104  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:27.931917  161014 type.go:168] "Request Body" body=""
	I1009 19:08:27.932031  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:27.932479  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:28.431253  161014 type.go:168] "Request Body" body=""
	I1009 19:08:28.431334  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:28.431741  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:28.931701  161014 type.go:168] "Request Body" body=""
	I1009 19:08:28.931793  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:28.932147  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:28.932231  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:29.430994  161014 type.go:168] "Request Body" body=""
	I1009 19:08:29.431076  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:29.431507  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:29.931284  161014 type.go:168] "Request Body" body=""
	I1009 19:08:29.931372  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:29.931786  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:30.431725  161014 type.go:168] "Request Body" body=""
	I1009 19:08:30.431807  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:30.432196  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:30.930995  161014 type.go:168] "Request Body" body=""
	I1009 19:08:30.931086  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:30.931489  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:31.431293  161014 type.go:168] "Request Body" body=""
	I1009 19:08:31.431407  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:31.431802  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:31.431899  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:31.931763  161014 type.go:168] "Request Body" body=""
	I1009 19:08:31.931847  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:31.932233  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:32.431064  161014 type.go:168] "Request Body" body=""
	I1009 19:08:32.431143  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:32.431569  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:32.931367  161014 type.go:168] "Request Body" body=""
	I1009 19:08:32.931475  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:32.931834  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:33.431666  161014 type.go:168] "Request Body" body=""
	I1009 19:08:33.431746  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:33.432152  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:33.432228  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:33.931085  161014 type.go:168] "Request Body" body=""
	I1009 19:08:33.931187  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:33.931603  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:34.431399  161014 type.go:168] "Request Body" body=""
	I1009 19:08:34.431485  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:34.431891  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:34.931782  161014 type.go:168] "Request Body" body=""
	I1009 19:08:34.931877  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:34.932244  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:35.431033  161014 type.go:168] "Request Body" body=""
	I1009 19:08:35.431120  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:35.431472  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:35.931247  161014 type.go:168] "Request Body" body=""
	I1009 19:08:35.931336  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:35.931759  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:35.931829  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:36.431682  161014 type.go:168] "Request Body" body=""
	I1009 19:08:36.431785  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:36.432193  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:36.931013  161014 type.go:168] "Request Body" body=""
	I1009 19:08:36.931099  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:36.931470  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:37.431265  161014 type.go:168] "Request Body" body=""
	I1009 19:08:37.431370  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:37.431819  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:37.931612  161014 type.go:168] "Request Body" body=""
	I1009 19:08:37.931700  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:37.932079  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:37.932145  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:38.430913  161014 type.go:168] "Request Body" body=""
	I1009 19:08:38.431022  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:38.431519  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:38.931233  161014 type.go:168] "Request Body" body=""
	I1009 19:08:38.931319  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:38.931686  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:39.431521  161014 type.go:168] "Request Body" body=""
	I1009 19:08:39.431627  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:39.432049  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:39.931904  161014 type.go:168] "Request Body" body=""
	I1009 19:08:39.932008  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:39.932353  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:39.932451  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:40.431183  161014 type.go:168] "Request Body" body=""
	I1009 19:08:40.431282  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:40.431716  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:40.931624  161014 type.go:168] "Request Body" body=""
	I1009 19:08:40.931713  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:40.932079  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:41.430889  161014 type.go:168] "Request Body" body=""
	I1009 19:08:41.430987  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:41.431423  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:41.931240  161014 type.go:168] "Request Body" body=""
	I1009 19:08:41.931324  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:41.931700  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:42.431534  161014 type.go:168] "Request Body" body=""
	I1009 19:08:42.431639  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:42.432064  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:42.432142  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:42.930885  161014 type.go:168] "Request Body" body=""
	I1009 19:08:42.930975  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:42.931354  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:43.431227  161014 type.go:168] "Request Body" body=""
	I1009 19:08:43.431323  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:43.431715  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:43.931552  161014 type.go:168] "Request Body" body=""
	I1009 19:08:43.931632  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:43.931992  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:44.431828  161014 type.go:168] "Request Body" body=""
	I1009 19:08:44.431924  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:44.432325  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:44.432415  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:44.931136  161014 type.go:168] "Request Body" body=""
	I1009 19:08:44.931245  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:44.931664  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:45.431554  161014 type.go:168] "Request Body" body=""
	I1009 19:08:45.431649  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:45.432042  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:45.931929  161014 type.go:168] "Request Body" body=""
	I1009 19:08:45.932032  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:45.932456  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:46.431215  161014 type.go:168] "Request Body" body=""
	I1009 19:08:46.431303  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:46.431675  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:46.931516  161014 type.go:168] "Request Body" body=""
	I1009 19:08:46.931612  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:46.932033  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:46.932105  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:47.431930  161014 type.go:168] "Request Body" body=""
	I1009 19:08:47.432024  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:47.432404  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:47.931253  161014 type.go:168] "Request Body" body=""
	I1009 19:08:47.931351  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:47.931772  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:48.431679  161014 type.go:168] "Request Body" body=""
	I1009 19:08:48.431767  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:48.432147  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:48.930986  161014 type.go:168] "Request Body" body=""
	I1009 19:08:48.931073  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:48.931466  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:49.431246  161014 type.go:168] "Request Body" body=""
	I1009 19:08:49.431332  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:49.431709  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:49.431791  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:49.931583  161014 type.go:168] "Request Body" body=""
	I1009 19:08:49.931665  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:49.932043  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:50.430854  161014 type.go:168] "Request Body" body=""
	I1009 19:08:50.430942  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:50.431310  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:50.931059  161014 type.go:168] "Request Body" body=""
	I1009 19:08:50.931138  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:50.931534  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:51.431317  161014 type.go:168] "Request Body" body=""
	I1009 19:08:51.431423  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:51.431783  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:51.431860  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:51.931678  161014 type.go:168] "Request Body" body=""
	I1009 19:08:51.931770  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:51.932161  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:52.430940  161014 type.go:168] "Request Body" body=""
	I1009 19:08:52.431043  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:52.431471  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:52.931234  161014 type.go:168] "Request Body" body=""
	I1009 19:08:52.931317  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:52.931697  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:53.431539  161014 type.go:168] "Request Body" body=""
	I1009 19:08:53.431626  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:53.432040  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:53.432105  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:53.931898  161014 type.go:168] "Request Body" body=""
	I1009 19:08:53.931980  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:53.932334  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:54.431123  161014 type.go:168] "Request Body" body=""
	I1009 19:08:54.431206  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:54.431572  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:54.931007  161014 type.go:168] "Request Body" body=""
	I1009 19:08:54.931094  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:54.931473  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:55.431255  161014 type.go:168] "Request Body" body=""
	I1009 19:08:55.431334  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:55.431719  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:55.931595  161014 type.go:168] "Request Body" body=""
	I1009 19:08:55.931684  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:55.932059  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:55.932132  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:56.430905  161014 type.go:168] "Request Body" body=""
	I1009 19:08:56.430996  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:56.431358  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:56.931139  161014 type.go:168] "Request Body" body=""
	I1009 19:08:56.931225  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:56.931614  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:57.431422  161014 type.go:168] "Request Body" body=""
	I1009 19:08:57.431520  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:57.431890  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:57.931717  161014 type.go:168] "Request Body" body=""
	I1009 19:08:57.931804  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:57.932243  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:57.932309  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:08:58.431442  161014 type.go:168] "Request Body" body=""
	I1009 19:08:58.431719  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:58.432305  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:58.931643  161014 type.go:168] "Request Body" body=""
	I1009 19:08:58.931736  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:58.932089  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:59.431793  161014 type.go:168] "Request Body" body=""
	I1009 19:08:59.431868  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:59.432216  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:08:59.931889  161014 type.go:168] "Request Body" body=""
	I1009 19:08:59.931982  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:08:59.932322  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:08:59.932430  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:00.430938  161014 type.go:168] "Request Body" body=""
	I1009 19:09:00.431025  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:00.431413  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:00.930953  161014 type.go:168] "Request Body" body=""
	I1009 19:09:00.931042  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:00.931443  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:01.431021  161014 type.go:168] "Request Body" body=""
	I1009 19:09:01.431114  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:01.431513  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:01.931074  161014 type.go:168] "Request Body" body=""
	I1009 19:09:01.931154  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:01.931545  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:02.431340  161014 type.go:168] "Request Body" body=""
	I1009 19:09:02.431449  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:02.431830  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:02.431902  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:02.931823  161014 type.go:168] "Request Body" body=""
	I1009 19:09:02.931913  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:02.932314  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:03.431114  161014 type.go:168] "Request Body" body=""
	I1009 19:09:03.431193  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:03.431578  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:03.931464  161014 type.go:168] "Request Body" body=""
	I1009 19:09:03.931552  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:03.931986  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:04.431831  161014 type.go:168] "Request Body" body=""
	I1009 19:09:04.431934  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:04.432314  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:04.432398  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:04.931129  161014 type.go:168] "Request Body" body=""
	I1009 19:09:04.931216  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:04.931674  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:05.431533  161014 type.go:168] "Request Body" body=""
	I1009 19:09:05.431611  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:05.432021  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:05.931854  161014 type.go:168] "Request Body" body=""
	I1009 19:09:05.931940  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:05.932334  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:06.431088  161014 type.go:168] "Request Body" body=""
	I1009 19:09:06.431167  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:06.431515  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:06.931278  161014 type.go:168] "Request Body" body=""
	I1009 19:09:06.931362  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:06.931748  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:06.931816  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:07.431644  161014 type.go:168] "Request Body" body=""
	I1009 19:09:07.431747  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:07.432178  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:07.931866  161014 type.go:168] "Request Body" body=""
	I1009 19:09:07.931950  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:07.932290  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:08.431090  161014 type.go:168] "Request Body" body=""
	I1009 19:09:08.431172  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:08.431650  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:08.931429  161014 type.go:168] "Request Body" body=""
	I1009 19:09:08.931507  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:08.931843  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:08.931909  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:09.431805  161014 type.go:168] "Request Body" body=""
	I1009 19:09:09.431897  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:09.432328  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:09.931113  161014 type.go:168] "Request Body" body=""
	I1009 19:09:09.931194  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:09.931569  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:10.431340  161014 type.go:168] "Request Body" body=""
	I1009 19:09:10.431473  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:10.431864  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:10.931696  161014 type.go:168] "Request Body" body=""
	I1009 19:09:10.931778  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:10.932062  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:10.932116  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:11.430855  161014 type.go:168] "Request Body" body=""
	I1009 19:09:11.430938  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:11.431371  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:11.931153  161014 type.go:168] "Request Body" body=""
	I1009 19:09:11.931230  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:11.931601  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:12.431453  161014 type.go:168] "Request Body" body=""
	I1009 19:09:12.431539  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:12.431968  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:12.931803  161014 type.go:168] "Request Body" body=""
	I1009 19:09:12.931890  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:12.932230  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:12.932299  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:13.431049  161014 type.go:168] "Request Body" body=""
	I1009 19:09:13.431141  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:13.431581  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:13.931422  161014 type.go:168] "Request Body" body=""
	I1009 19:09:13.931504  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:13.931849  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:14.431710  161014 type.go:168] "Request Body" body=""
	I1009 19:09:14.431803  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:14.432200  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:14.930978  161014 type.go:168] "Request Body" body=""
	I1009 19:09:14.931058  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:14.931421  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:15.431205  161014 type.go:168] "Request Body" body=""
	I1009 19:09:15.431290  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:15.431792  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:15.431868  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:15.931738  161014 type.go:168] "Request Body" body=""
	I1009 19:09:15.931822  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:15.932171  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:16.430949  161014 type.go:168] "Request Body" body=""
	I1009 19:09:16.431033  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:16.431370  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:16.931168  161014 type.go:168] "Request Body" body=""
	I1009 19:09:16.931244  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:16.931635  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:17.431446  161014 type.go:168] "Request Body" body=""
	I1009 19:09:17.431534  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:17.431909  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:17.431982  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:17.931495  161014 type.go:168] "Request Body" body=""
	I1009 19:09:17.931580  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:17.931927  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:18.431744  161014 type.go:168] "Request Body" body=""
	I1009 19:09:18.431828  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:18.432200  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:18.931151  161014 type.go:168] "Request Body" body=""
	I1009 19:09:18.931250  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:18.931652  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:19.431441  161014 type.go:168] "Request Body" body=""
	I1009 19:09:19.431529  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:19.431984  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:19.432070  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:09:19.931848  161014 type.go:168] "Request Body" body=""
	I1009 19:09:19.931941  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:19.932309  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:20.431088  161014 type.go:168] "Request Body" body=""
	I1009 19:09:20.431164  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:20.431555  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:20.931352  161014 type.go:168] "Request Body" body=""
	I1009 19:09:20.931455  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:20.931826  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:09:21.431728  161014 type.go:168] "Request Body" body=""
	I1009 19:09:21.431814  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:09:21.432175  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:09:21.432242  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-158523 poll repeated every ~500 ms from 19:09:21.930958 through 19:10:21.931769, each request carrying the same Accept and User-Agent headers, each response empty, and each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 logged the "will retry" warning roughly every two seconds throughout]
	I1009 19:10:22.431592  161014 type.go:168] "Request Body" body=""
	I1009 19:10:22.431689  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:22.432061  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:22.432138  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:22.930890  161014 type.go:168] "Request Body" body=""
	I1009 19:10:22.930981  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:22.931355  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:23.431123  161014 type.go:168] "Request Body" body=""
	I1009 19:10:23.431202  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:23.431562  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:23.931393  161014 type.go:168] "Request Body" body=""
	I1009 19:10:23.931475  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:23.931849  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:24.431681  161014 type.go:168] "Request Body" body=""
	I1009 19:10:24.431765  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:24.432120  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:24.432200  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:24.930948  161014 type.go:168] "Request Body" body=""
	I1009 19:10:24.931038  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:24.931411  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:25.431172  161014 type.go:168] "Request Body" body=""
	I1009 19:10:25.431263  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:25.431645  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:25.931517  161014 type.go:168] "Request Body" body=""
	I1009 19:10:25.931604  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:25.931950  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:26.431795  161014 type.go:168] "Request Body" body=""
	I1009 19:10:26.431877  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:26.432259  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:26.432327  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:26.931108  161014 type.go:168] "Request Body" body=""
	I1009 19:10:26.931192  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:26.931561  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:27.431372  161014 type.go:168] "Request Body" body=""
	I1009 19:10:27.431478  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:27.431852  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:27.931767  161014 type.go:168] "Request Body" body=""
	I1009 19:10:27.931844  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:27.932243  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:28.431036  161014 type.go:168] "Request Body" body=""
	I1009 19:10:28.431114  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:28.431500  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:28.931317  161014 type.go:168] "Request Body" body=""
	I1009 19:10:28.931415  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:28.931802  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:28.931870  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:29.431682  161014 type.go:168] "Request Body" body=""
	I1009 19:10:29.431764  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:29.432158  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:29.930948  161014 type.go:168] "Request Body" body=""
	I1009 19:10:29.931029  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:29.931432  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:30.431237  161014 type.go:168] "Request Body" body=""
	I1009 19:10:30.431318  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:30.431700  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:30.931592  161014 type.go:168] "Request Body" body=""
	I1009 19:10:30.931686  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:30.932066  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:30.932138  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:31.430865  161014 type.go:168] "Request Body" body=""
	I1009 19:10:31.430944  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:31.431326  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:31.931100  161014 type.go:168] "Request Body" body=""
	I1009 19:10:31.931183  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:31.931557  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:32.431408  161014 type.go:168] "Request Body" body=""
	I1009 19:10:32.431492  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:32.431860  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:32.931727  161014 type.go:168] "Request Body" body=""
	I1009 19:10:32.931827  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:32.932201  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:32.932275  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:33.431035  161014 type.go:168] "Request Body" body=""
	I1009 19:10:33.431127  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:33.431509  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:33.931347  161014 type.go:168] "Request Body" body=""
	I1009 19:10:33.931452  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:33.931805  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:34.431659  161014 type.go:168] "Request Body" body=""
	I1009 19:10:34.431767  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:34.432157  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:34.930935  161014 type.go:168] "Request Body" body=""
	I1009 19:10:34.931032  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:34.931422  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:35.431188  161014 type.go:168] "Request Body" body=""
	I1009 19:10:35.431266  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:35.431638  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:35.431700  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:35.931496  161014 type.go:168] "Request Body" body=""
	I1009 19:10:35.931583  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:35.931982  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:36.431843  161014 type.go:168] "Request Body" body=""
	I1009 19:10:36.431930  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:36.432287  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:36.931012  161014 type.go:168] "Request Body" body=""
	I1009 19:10:36.931101  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:36.931479  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:37.431254  161014 type.go:168] "Request Body" body=""
	I1009 19:10:37.431336  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:37.431708  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:37.431785  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:37.931498  161014 type.go:168] "Request Body" body=""
	I1009 19:10:37.931578  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:37.931952  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:38.431802  161014 type.go:168] "Request Body" body=""
	I1009 19:10:38.431878  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:38.432242  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:38.931094  161014 type.go:168] "Request Body" body=""
	I1009 19:10:38.931171  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:38.931535  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:39.431342  161014 type.go:168] "Request Body" body=""
	I1009 19:10:39.431467  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:39.431828  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:39.431894  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:39.931678  161014 type.go:168] "Request Body" body=""
	I1009 19:10:39.931769  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:39.932114  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:40.430894  161014 type.go:168] "Request Body" body=""
	I1009 19:10:40.431002  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:40.431338  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:40.931086  161014 type.go:168] "Request Body" body=""
	I1009 19:10:40.931169  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:40.931549  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:41.431354  161014 type.go:168] "Request Body" body=""
	I1009 19:10:41.431484  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:41.431936  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:41.432009  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:41.931856  161014 type.go:168] "Request Body" body=""
	I1009 19:10:41.931944  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:41.932342  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:42.431239  161014 type.go:168] "Request Body" body=""
	I1009 19:10:42.431343  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:42.431776  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:42.931642  161014 type.go:168] "Request Body" body=""
	I1009 19:10:42.931724  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:42.932139  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:43.430955  161014 type.go:168] "Request Body" body=""
	I1009 19:10:43.431055  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:43.431482  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:43.931286  161014 type.go:168] "Request Body" body=""
	I1009 19:10:43.931364  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:43.931761  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:43.931841  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:44.431651  161014 type.go:168] "Request Body" body=""
	I1009 19:10:44.431739  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:44.432136  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:44.930918  161014 type.go:168] "Request Body" body=""
	I1009 19:10:44.930997  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:44.931368  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:45.431210  161014 type.go:168] "Request Body" body=""
	I1009 19:10:45.431301  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:45.431803  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:45.931785  161014 type.go:168] "Request Body" body=""
	I1009 19:10:45.931879  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:45.932234  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:45.932298  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:46.431044  161014 type.go:168] "Request Body" body=""
	I1009 19:10:46.431130  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:46.431509  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:46.931298  161014 type.go:168] "Request Body" body=""
	I1009 19:10:46.931409  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:46.931768  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:47.431684  161014 type.go:168] "Request Body" body=""
	I1009 19:10:47.431772  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:47.432192  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:47.930892  161014 type.go:168] "Request Body" body=""
	I1009 19:10:47.931082  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:47.931491  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:48.431254  161014 type.go:168] "Request Body" body=""
	I1009 19:10:48.431334  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:48.431754  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:48.431817  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:48.931519  161014 type.go:168] "Request Body" body=""
	I1009 19:10:48.931605  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:48.931963  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:49.431903  161014 type.go:168] "Request Body" body=""
	I1009 19:10:49.431995  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:49.432442  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:49.931216  161014 type.go:168] "Request Body" body=""
	I1009 19:10:49.931299  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:49.931685  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:50.431513  161014 type.go:168] "Request Body" body=""
	I1009 19:10:50.431600  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:50.432015  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:50.432094  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:50.931903  161014 type.go:168] "Request Body" body=""
	I1009 19:10:50.931985  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:50.932356  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:51.431151  161014 type.go:168] "Request Body" body=""
	I1009 19:10:51.431235  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:51.431691  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:51.931607  161014 type.go:168] "Request Body" body=""
	I1009 19:10:51.931704  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:51.932066  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:52.430855  161014 type.go:168] "Request Body" body=""
	I1009 19:10:52.430936  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:52.431352  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:52.931144  161014 type.go:168] "Request Body" body=""
	I1009 19:10:52.931236  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:52.931629  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:52.931694  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:53.431504  161014 type.go:168] "Request Body" body=""
	I1009 19:10:53.431592  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:53.431978  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:53.930879  161014 type.go:168] "Request Body" body=""
	I1009 19:10:53.930990  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:53.931420  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:54.431176  161014 type.go:168] "Request Body" body=""
	I1009 19:10:54.431256  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:54.431696  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:54.931517  161014 type.go:168] "Request Body" body=""
	I1009 19:10:54.931600  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:54.932006  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:54.932070  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:55.431919  161014 type.go:168] "Request Body" body=""
	I1009 19:10:55.432013  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:55.432499  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:55.931252  161014 type.go:168] "Request Body" body=""
	I1009 19:10:55.931340  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:55.931770  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:56.431601  161014 type.go:168] "Request Body" body=""
	I1009 19:10:56.431705  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:56.432063  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:56.930857  161014 type.go:168] "Request Body" body=""
	I1009 19:10:56.930944  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:56.931308  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 19:10:57.431063  161014 type.go:168] "Request Body" body=""
	I1009 19:10:57.431152  161014 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-158523" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 19:10:57.431557  161014 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 19:10:57.431627  161014 node_ready.go:55] error getting node "functional-158523" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-158523": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 19:10:57.931435  161014 type.go:168] "Request Body" body=""
	I1009 19:10:57.931520  161014 node_ready.go:38] duration metric: took 6m0.000788191s for node "functional-158523" to be "Ready" ...
	I1009 19:10:57.934316  161014 out.go:203] 
	W1009 19:10:57.935818  161014 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:10:57.935834  161014 out.go:285] * 
	W1009 19:10:57.937485  161014 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:10:57.938875  161014 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:11:08 functional-158523 crio[2962]: time="2025-10-09T19:11:08.314246588Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=104975bf-2b48-4e4d-a86e-25ef03ca74ca name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:08 functional-158523 crio[2962]: time="2025-10-09T19:11:08.615737223Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=45beb944-3390-4ed5-af26-767e709564ee name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:08 functional-158523 crio[2962]: time="2025-10-09T19:11:08.615900524Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=45beb944-3390-4ed5-af26-767e709564ee name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:08 functional-158523 crio[2962]: time="2025-10-09T19:11:08.615958367Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=45beb944-3390-4ed5-af26-767e709564ee name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:09 functional-158523 crio[2962]: time="2025-10-09T19:11:09.137430266Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=af9c3420-8b23-4257-8355-007a5da08d11 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:09 functional-158523 crio[2962]: time="2025-10-09T19:11:09.137868818Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=af9c3420-8b23-4257-8355-007a5da08d11 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:09 functional-158523 crio[2962]: time="2025-10-09T19:11:09.137933141Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=af9c3420-8b23-4257-8355-007a5da08d11 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:09 functional-158523 crio[2962]: time="2025-10-09T19:11:09.179160176Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=32a3a6df-6778-4740-bd0d-d8b7567cba27 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:09 functional-158523 crio[2962]: time="2025-10-09T19:11:09.179317093Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=32a3a6df-6778-4740-bd0d-d8b7567cba27 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:09 functional-158523 crio[2962]: time="2025-10-09T19:11:09.179349349Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=32a3a6df-6778-4740-bd0d-d8b7567cba27 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:09 functional-158523 crio[2962]: time="2025-10-09T19:11:09.20621085Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=5a2d42a7-ded8-4fca-8f38-b709675531e3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:09 functional-158523 crio[2962]: time="2025-10-09T19:11:09.206368283Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=5a2d42a7-ded8-4fca-8f38-b709675531e3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:09 functional-158523 crio[2962]: time="2025-10-09T19:11:09.206431755Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=5a2d42a7-ded8-4fca-8f38-b709675531e3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:09 functional-158523 crio[2962]: time="2025-10-09T19:11:09.678541017Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=1415653d-2625-44ae-837a-84f84cc9d152 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:11 functional-158523 crio[2962]: time="2025-10-09T19:11:11.619517749Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=134177c8-6447-42af-a917-ea47ae8eaf9f name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:11 functional-158523 crio[2962]: time="2025-10-09T19:11:11.620552146Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=302f2489-fbe9-4017-894a-3bacd39dfbad name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:11:11 functional-158523 crio[2962]: time="2025-10-09T19:11:11.621667632Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-158523/kube-apiserver" id=934a08ad-9632-4f76-9b7f-032b82f4cf79 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:11:11 functional-158523 crio[2962]: time="2025-10-09T19:11:11.621942926Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:11:11 functional-158523 crio[2962]: time="2025-10-09T19:11:11.626575692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:11:11 functional-158523 crio[2962]: time="2025-10-09T19:11:11.627173276Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:11:11 functional-158523 crio[2962]: time="2025-10-09T19:11:11.647464351Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=934a08ad-9632-4f76-9b7f-032b82f4cf79 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:11:11 functional-158523 crio[2962]: time="2025-10-09T19:11:11.64906352Z" level=info msg="createCtr: deleting container ID 1caef5234df58f6c24c17a030300646fabf1e82fda31a4d9123c73552033c89a from idIndex" id=934a08ad-9632-4f76-9b7f-032b82f4cf79 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:11:11 functional-158523 crio[2962]: time="2025-10-09T19:11:11.649104235Z" level=info msg="createCtr: removing container 1caef5234df58f6c24c17a030300646fabf1e82fda31a4d9123c73552033c89a" id=934a08ad-9632-4f76-9b7f-032b82f4cf79 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:11:11 functional-158523 crio[2962]: time="2025-10-09T19:11:11.649136836Z" level=info msg="createCtr: deleting container 1caef5234df58f6c24c17a030300646fabf1e82fda31a4d9123c73552033c89a from storage" id=934a08ad-9632-4f76-9b7f-032b82f4cf79 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:11:11 functional-158523 crio[2962]: time="2025-10-09T19:11:11.651796877Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-158523_kube-system_bbd906eec6f9b7c1a1a340fc9a9fdcd1_0" id=934a08ad-9632-4f76-9b7f-032b82f4cf79 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:11:13.311479    5485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:11:13.312056    5485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:11:13.313545    5485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:11:13.314015    5485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:11:13.315127    5485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:11:13 up 53 min,  0 user,  load average: 0.36, 0.20, 9.23
	Linux functional-158523 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:11:04 functional-158523 kubelet[1810]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-158523_kube-system(08a2248bbc397b9ed927890e2073120b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:11:04 functional-158523 kubelet[1810]:  > logger="UnhandledError"
	Oct 09 19:11:04 functional-158523 kubelet[1810]: E1009 19:11:04.644333    1810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-158523" podUID="08a2248bbc397b9ed927890e2073120b"
	Oct 09 19:11:06 functional-158523 kubelet[1810]: E1009 19:11:06.310265    1810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-158523?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 19:11:06 functional-158523 kubelet[1810]: I1009 19:11:06.519623    1810 kubelet_node_status.go:75] "Attempting to register node" node="functional-158523"
	Oct 09 19:11:06 functional-158523 kubelet[1810]: E1009 19:11:06.520042    1810 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-158523"
	Oct 09 19:11:06 functional-158523 kubelet[1810]: E1009 19:11:06.593842    1810 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-158523.186ce7d3e1d25377\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-158523.186ce7d3e1d25377  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-158523,UID:functional-158523,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-158523 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-158523,},FirstTimestamp:2025-10-09 19:00:51.607794551 +0000 UTC m=+0.591054211,LastTimestamp:2025-10-09 19:00:51.609818572 +0000 UTC m=+0.593078239,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-158523,}"
	Oct 09 19:11:07 functional-158523 kubelet[1810]: E1009 19:11:07.618261    1810 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:11:07 functional-158523 kubelet[1810]: E1009 19:11:07.645885    1810 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:11:07 functional-158523 kubelet[1810]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:11:07 functional-158523 kubelet[1810]:  > podSandboxID="c5f59cf39316c74dd65d2925d309cbd6e6fdc48c022b61803b3c6d8d973e588c"
	Oct 09 19:11:07 functional-158523 kubelet[1810]: E1009 19:11:07.646021    1810 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:11:07 functional-158523 kubelet[1810]:         container etcd start failed in pod etcd-functional-158523_kube-system(8f4f9df5924bbaa4e1ec7f60e6576647): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:11:07 functional-158523 kubelet[1810]:  > logger="UnhandledError"
	Oct 09 19:11:07 functional-158523 kubelet[1810]: E1009 19:11:07.646063    1810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-158523" podUID="8f4f9df5924bbaa4e1ec7f60e6576647"
	Oct 09 19:11:11 functional-158523 kubelet[1810]: E1009 19:11:11.618884    1810 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:11:11 functional-158523 kubelet[1810]: E1009 19:11:11.652443    1810 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:11:11 functional-158523 kubelet[1810]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:11:11 functional-158523 kubelet[1810]:  > podSandboxID="e6a4bc1b2df9d751888af8288e7c4c569afb0335567fe2f74c173dbe4e47f513"
	Oct 09 19:11:11 functional-158523 kubelet[1810]: E1009 19:11:11.652555    1810 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:11:11 functional-158523 kubelet[1810]:         container kube-apiserver start failed in pod kube-apiserver-functional-158523_kube-system(bbd906eec6f9b7c1a1a340fc9a9fdcd1): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:11:11 functional-158523 kubelet[1810]:  > logger="UnhandledError"
	Oct 09 19:11:11 functional-158523 kubelet[1810]: E1009 19:11:11.652600    1810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-158523" podUID="bbd906eec6f9b7c1a1a340fc9a9fdcd1"
	Oct 09 19:11:11 functional-158523 kubelet[1810]: E1009 19:11:11.657661    1810 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-158523\" not found"
	Oct 09 19:11:13 functional-158523 kubelet[1810]: E1009 19:11:13.311515    1810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-158523?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523: exit status 2 (307.036863ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-158523" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.17s)
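The kubelet entries above fail repeatedly to create the etcd and kube-apiserver containers with "cannot open sd-bus: No such file or directory", so the control-plane containers never come into existence and there are no container logs to read. As a rough follow-up sketch only (not part of the recorded run, and assuming crictl and journalctl are usable inside the node, as the rest of this report suggests), one could list what CRI-O actually created and then check its service journal, since a CreateContainerError leaves nothing for 'crictl logs' to show:

	# list pod sandboxes and any containers CRI-O managed to create
	out/minikube-linux-amd64 -p functional-158523 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods"
	out/minikube-linux-amd64 -p functional-158523 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"
	# the create error itself comes from CRI-O, so its own journal is the more useful log here
	out/minikube-linux-amd64 -p functional-158523 ssh "sudo journalctl -u crio --no-pager | tail -n 50"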

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (737.05s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-158523 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-158523 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (12m15.110370198s)

                                                
                                                
-- stdout --
	* [functional-158523] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-158523" primary control-plane node in "functional-158523" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001072895s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000416121s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000591031s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000888179s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001082703s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000814983s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001138726s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001237075s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001082703s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000814983s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001138726s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001237075s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-158523 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:776: restart took 12m15.112841911s for "functional-158523" cluster.
I1009 19:23:29.223621  141519 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
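The kubeadm output above spells out what the control-plane check was polling: the apiserver /livez endpoint on 192.168.49.2:8441 and the controller-manager and scheduler health ports on 127.0.0.1:10257 and 127.0.0.1:10259, all of which refused connections. A minimal sketch of re-probing those same endpoints from inside the node (illustrative only; it assumes curl is available in the kicbase image, which this report does not itself show):

	out/minikube-linux-amd64 -p functional-158523 ssh "curl -ksS https://192.168.49.2:8441/livez"      # kube-apiserver
	out/minikube-linux-amd64 -p functional-158523 ssh "curl -ksS https://127.0.0.1:10257/healthz"      # kube-controller-manager
	out/minikube-linux-amd64 -p functional-158523 ssh "curl -ksS https://127.0.0.1:10259/livez"        # kube-scheduler

A "connection refused" from all three would simply restate what the log already shows: nothing is listening because the static pod containers were never created.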
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-158523
helpers_test.go:243: (dbg) docker inspect functional-158523:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	        "Created": "2025-10-09T18:56:39.519997973Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 156103,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:56:39.555178961Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hosts",
	        "LogPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4-json.log",
	        "Name": "/functional-158523",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-158523:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-158523",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	                "LowerDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-158523",
	                "Source": "/var/lib/docker/volumes/functional-158523/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-158523",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-158523",
	                "name.minikube.sigs.k8s.io": "functional-158523",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "50f61b8c99f766763e9e30d37584e537ddf10f74215a382a4221cbe7f7c2e821",
	            "SandboxKey": "/var/run/docker/netns/50f61b8c99f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-158523": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:a1:34:24:78:d1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6347eb7b6b2842e266f51d3873c8aa169a4287ea52510c3d62dc4ed41012963c",
	                    "EndpointID": "eb1ada23cbc058d52037dd94574627dd1b0fedf455f291e656f050bf06881952",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-158523",
	                        "dea3735c567c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
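The inspect output above also records the port bindings this profile depends on: 8441/tcp, the apiserver port chosen for functional-158523, is published on 127.0.0.1:32781. As a small sketch, the same mapping can be read back directly with the Go-template form the "Last Start" log below uses for 22/tcp (hypothetical invocation, not part of the recorded run):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-158523
	# with the state captured above this prints 32781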
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523: exit status 2 (313.662354ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
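The --format={{.Host}} probe above reads a single field. Running the status command without a format string would print host, kubelet, apiserver and kubeconfig state together, which makes the "host Running / apiserver Stopped" split in this post-mortem easier to see at a glance; sketch only, not executed as part of this run:

	out/minikube-linux-amd64 status -p functional-158523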
helpers_test.go:252: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 logs -n 25
helpers_test.go:260: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-656427 --log_dir /tmp/nospam-656427 unpause                                                            │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ unpause │ nospam-656427 --log_dir /tmp/nospam-656427 unpause                                                            │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ unpause │ nospam-656427 --log_dir /tmp/nospam-656427 unpause                                                            │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ nospam-656427 --log_dir /tmp/nospam-656427 stop                                                               │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ nospam-656427 --log_dir /tmp/nospam-656427 stop                                                               │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ nospam-656427 --log_dir /tmp/nospam-656427 stop                                                               │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ delete  │ -p nospam-656427                                                                                              │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ start   │ -p functional-158523 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ start   │ -p functional-158523 --alsologtostderr -v=8                                                                   │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:04 UTC │                     │
	│ cache   │ functional-158523 cache add registry.k8s.io/pause:3.1                                                         │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ functional-158523 cache add registry.k8s.io/pause:3.3                                                         │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ functional-158523 cache add registry.k8s.io/pause:latest                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ functional-158523 cache add minikube-local-cache-test:functional-158523                                       │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ functional-158523 cache delete minikube-local-cache-test:functional-158523                                    │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-158523 ssh sudo crictl images                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-158523 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-158523 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │                     │
	│ cache   │ functional-158523 cache reload                                                                                │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-158523 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ kubectl │ functional-158523 kubectl -- --context functional-158523 get pods                                             │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │                     │
	│ start   │ -p functional-158523 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:11:14
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:11:14.157038  167468 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:11:14.157144  167468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:11:14.157147  167468 out.go:374] Setting ErrFile to fd 2...
	I1009 19:11:14.157150  167468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:11:14.157397  167468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:11:14.157856  167468 out.go:368] Setting JSON to false
	I1009 19:11:14.158722  167468 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3223,"bootTime":1760033851,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:11:14.158807  167468 start.go:143] virtualization: kvm guest
	I1009 19:11:14.160952  167468 out.go:179] * [functional-158523] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:11:14.162586  167468 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:11:14.162634  167468 notify.go:221] Checking for updates...
	I1009 19:11:14.165525  167468 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:11:14.166942  167468 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:11:14.170608  167468 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:11:14.171837  167468 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:11:14.173196  167468 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:11:14.175072  167468 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:11:14.175208  167468 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:11:14.203136  167468 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:11:14.203286  167468 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:11:14.264483  167468 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-09 19:11:14.254475753 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:11:14.264582  167468 docker.go:319] overlay module found
	I1009 19:11:14.266408  167468 out.go:179] * Using the docker driver based on existing profile
	I1009 19:11:14.267558  167468 start.go:309] selected driver: docker
	I1009 19:11:14.267564  167468 start.go:930] validating driver "docker" against &{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:11:14.267655  167468 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:11:14.267744  167468 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:11:14.329654  167468 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-09 19:11:14.319992483 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:11:14.330205  167468 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:11:14.330223  167468 cni.go:84] Creating CNI manager for ""
	I1009 19:11:14.330253  167468 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:11:14.330287  167468 start.go:353] cluster config:
	{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:11:14.332505  167468 out.go:179] * Starting "functional-158523" primary control-plane node in "functional-158523" cluster
	I1009 19:11:14.334058  167468 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:11:14.335345  167468 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:11:14.336493  167468 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:11:14.336527  167468 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:11:14.336536  167468 cache.go:58] Caching tarball of preloaded images
	I1009 19:11:14.336602  167468 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:11:14.336625  167468 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:11:14.336631  167468 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:11:14.336732  167468 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/config.json ...
	I1009 19:11:14.356941  167468 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:11:14.356956  167468 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:11:14.356970  167468 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:11:14.356995  167468 start.go:361] acquireMachinesLock for functional-158523: {Name:mk995713bbd40419f859c4a8640c8ada0479020c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:11:14.357048  167468 start.go:365] duration metric: took 38.867µs to acquireMachinesLock for "functional-158523"
	I1009 19:11:14.357061  167468 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:11:14.357066  167468 fix.go:55] fixHost starting: 
	I1009 19:11:14.357257  167468 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:11:14.373853  167468 fix.go:113] recreateIfNeeded on functional-158523: state=Running err=<nil>
	W1009 19:11:14.373882  167468 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:11:14.375583  167468 out.go:252] * Updating the running docker "functional-158523" container ...
	I1009 19:11:14.375606  167468 machine.go:93] provisionDockerMachine start ...
	I1009 19:11:14.375672  167468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:11:14.393133  167468 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:14.393345  167468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:11:14.393352  167468 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:11:14.538696  167468 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-158523
	
	I1009 19:11:14.538716  167468 ubuntu.go:182] provisioning hostname "functional-158523"
	I1009 19:11:14.538785  167468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:11:14.557084  167468 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:14.557356  167468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:11:14.557367  167468 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-158523 && echo "functional-158523" | sudo tee /etc/hostname
	I1009 19:11:14.713522  167468 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-158523
	
	I1009 19:11:14.713596  167468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:11:14.731559  167468 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:14.731842  167468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:11:14.731856  167468 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-158523' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-158523/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-158523' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:11:14.877193  167468 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:11:14.877220  167468 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:11:14.877247  167468 ubuntu.go:190] setting up certificates
	I1009 19:11:14.877258  167468 provision.go:84] configureAuth start
	I1009 19:11:14.877334  167468 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-158523
	I1009 19:11:14.894643  167468 provision.go:143] copyHostCerts
	I1009 19:11:14.894694  167468 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:11:14.894709  167468 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:11:14.894773  167468 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:11:14.894862  167468 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:11:14.894865  167468 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:11:14.894889  167468 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:11:14.894937  167468 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:11:14.894940  167468 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:11:14.894959  167468 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:11:14.895003  167468 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.functional-158523 san=[127.0.0.1 192.168.49.2 functional-158523 localhost minikube]
	I1009 19:11:15.233918  167468 provision.go:177] copyRemoteCerts
	I1009 19:11:15.233967  167468 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:11:15.234007  167468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:11:15.251853  167468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:11:15.355329  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:11:15.374955  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 19:11:15.393475  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:11:15.412247  167468 provision.go:87] duration metric: took 534.974389ms to configureAuth
	I1009 19:11:15.412267  167468 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:11:15.412477  167468 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:11:15.412594  167468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:11:15.430627  167468 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:15.430837  167468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:11:15.430849  167468 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:11:15.707832  167468 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:11:15.707848  167468 machine.go:96] duration metric: took 1.33223564s to provisionDockerMachine
	I1009 19:11:15.707858  167468 start.go:294] postStartSetup for "functional-158523" (driver="docker")
	I1009 19:11:15.707868  167468 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:11:15.707919  167468 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:11:15.707980  167468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:11:15.725705  167468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:11:15.827905  167468 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:11:15.831650  167468 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:11:15.831668  167468 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:11:15.831679  167468 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:11:15.831740  167468 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:11:15.831815  167468 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:11:15.831878  167468 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts -> hosts in /etc/test/nested/copy/141519
	I1009 19:11:15.831909  167468 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/141519
	I1009 19:11:15.839531  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:11:15.857737  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts --> /etc/test/nested/copy/141519/hosts (40 bytes)
	I1009 19:11:15.875073  167468 start.go:297] duration metric: took 167.196866ms for postStartSetup
	I1009 19:11:15.875151  167468 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:11:15.875185  167468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:11:15.893217  167468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:11:15.993724  167468 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:11:15.998524  167468 fix.go:57] duration metric: took 1.641448896s for fixHost
	I1009 19:11:15.998548  167468 start.go:84] releasing machines lock for "functional-158523", held for 1.641493243s
	I1009 19:11:15.998615  167468 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-158523
	I1009 19:11:16.017075  167468 ssh_runner.go:195] Run: cat /version.json
	I1009 19:11:16.017091  167468 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:11:16.017114  167468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:11:16.017144  167468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:11:16.036046  167468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:11:16.036330  167468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:11:16.188713  167468 ssh_runner.go:195] Run: systemctl --version
	I1009 19:11:16.196168  167468 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:11:16.231948  167468 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:11:16.236768  167468 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:11:16.236819  167468 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:11:16.245113  167468 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:11:16.245131  167468 start.go:496] detecting cgroup driver to use...
	I1009 19:11:16.245167  167468 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:11:16.245211  167468 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:11:16.259663  167468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:11:16.272373  167468 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:11:16.272435  167468 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:11:16.287252  167468 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:11:16.299952  167468 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:11:16.392105  167468 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:11:16.479823  167468 docker.go:234] disabling docker service ...
	I1009 19:11:16.479877  167468 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:11:16.494456  167468 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:11:16.507602  167468 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:11:16.592867  167468 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:11:16.683605  167468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:11:16.710180  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:11:16.725165  167468 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:11:16.725208  167468 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:16.734043  167468 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:11:16.734092  167468 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:16.743004  167468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:16.751778  167468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:16.760817  167468 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:11:16.768978  167468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:16.778147  167468 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:16.786486  167468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:16.795315  167468 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:11:16.802691  167468 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:11:16.809903  167468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:11:16.905667  167468 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:11:17.020220  167468 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:11:17.020286  167468 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:11:17.024261  167468 start.go:564] Will wait 60s for crictl version
	I1009 19:11:17.024305  167468 ssh_runner.go:195] Run: which crictl
	I1009 19:11:17.027760  167468 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:11:17.051881  167468 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:11:17.051942  167468 ssh_runner.go:195] Run: crio --version
	I1009 19:11:17.080716  167468 ssh_runner.go:195] Run: crio --version
	I1009 19:11:17.111432  167468 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:11:17.112945  167468 cli_runner.go:164] Run: docker network inspect functional-158523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:11:17.130349  167468 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:11:17.136436  167468 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1009 19:11:17.137696  167468 kubeadm.go:883] updating cluster {Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:11:17.137806  167468 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:11:17.137860  167468 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:11:17.174863  167468 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:11:17.174875  167468 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:11:17.174927  167468 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:11:17.201355  167468 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:11:17.201367  167468 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:11:17.201372  167468 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1009 19:11:17.201491  167468 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-158523 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:11:17.201558  167468 ssh_runner.go:195] Run: crio config
	I1009 19:11:17.248070  167468 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1009 19:11:17.248092  167468 cni.go:84] Creating CNI manager for ""
	I1009 19:11:17.248099  167468 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:11:17.248108  167468 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:11:17.248129  167468 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-158523 NodeName:functional-158523 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:11:17.248244  167468 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-158523"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:11:17.248301  167468 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:11:17.256659  167468 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:11:17.256725  167468 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:11:17.265104  167468 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 19:11:17.278149  167468 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:11:17.291161  167468 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1009 19:11:17.304170  167468 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:11:17.308091  167468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:11:17.393652  167468 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:11:17.406930  167468 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523 for IP: 192.168.49.2
	I1009 19:11:17.406944  167468 certs.go:195] generating shared ca certs ...
	I1009 19:11:17.406959  167468 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:17.407115  167468 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:11:17.407147  167468 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:11:17.407152  167468 certs.go:257] generating profile certs ...
	I1009 19:11:17.407227  167468 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key
	I1009 19:11:17.407261  167468 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key.1809350a
	I1009 19:11:17.407289  167468 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key
	I1009 19:11:17.407430  167468 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:11:17.407466  167468 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:11:17.407475  167468 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:11:17.407500  167468 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:11:17.407523  167468 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:11:17.407548  167468 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:11:17.407584  167468 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:11:17.408210  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:11:17.427246  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:11:17.445339  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:11:17.462828  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:11:17.480653  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:11:17.499524  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:11:17.518652  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:11:17.536330  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:11:17.554544  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:11:17.572216  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:11:17.589806  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:11:17.607162  167468 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:11:17.619605  167468 ssh_runner.go:195] Run: openssl version
	I1009 19:11:17.625893  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:11:17.634967  167468 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:11:17.638971  167468 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:11:17.639017  167468 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:11:17.673097  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:11:17.681781  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:11:17.690510  167468 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:17.694244  167468 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:17.694287  167468 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:17.728858  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:11:17.737406  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:11:17.746208  167468 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:11:17.749994  167468 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:11:17.750054  167468 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:11:17.784891  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:11:17.793493  167468 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:11:17.797539  167468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:11:17.833179  167468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:11:17.867879  167468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:11:17.902538  167468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:11:17.937115  167468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:11:17.972083  167468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 19:11:18.007424  167468 kubeadm.go:400] StartCluster: {Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:11:18.007509  167468 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:11:18.007561  167468 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:11:18.035547  167468 cri.go:89] found id: ""
	I1009 19:11:18.035607  167468 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:11:18.043904  167468 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:11:18.043917  167468 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:11:18.043958  167468 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:11:18.051515  167468 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:11:18.052124  167468 kubeconfig.go:125] found "functional-158523" server: "https://192.168.49.2:8441"
	I1009 19:11:18.053652  167468 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:11:18.061973  167468 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-09 18:56:43.847270831 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-09 19:11:17.301680145 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1009 19:11:18.061997  167468 kubeadm.go:1160] stopping kube-system containers ...
	I1009 19:11:18.062011  167468 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 19:11:18.062062  167468 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:11:18.090233  167468 cri.go:89] found id: ""
	I1009 19:11:18.090298  167468 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 19:11:18.135227  167468 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:11:18.143667  167468 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5623 Oct  9 19:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  9 19:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  9 19:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  9 19:00 /etc/kubernetes/scheduler.conf
	
	I1009 19:11:18.143727  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 19:11:18.151903  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 19:11:18.160031  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:11:18.160092  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:11:18.167823  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 19:11:18.175748  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:11:18.175802  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:11:18.184016  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 19:11:18.192107  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:11:18.192164  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:11:18.199911  167468 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:11:18.208125  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:11:18.251392  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:11:19.844491  167468 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.593070913s)
	I1009 19:11:19.844554  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:11:20.007259  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:11:20.056142  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:11:20.106149  167468 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:11:20.106217  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:20.607128  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:21.106506  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:21.607044  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:22.106495  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:22.607290  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:23.107176  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:23.606512  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:24.106477  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:24.607120  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:25.106702  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:25.606496  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:26.107306  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:26.606426  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:27.107156  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:27.606967  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:28.106986  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:28.607360  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:29.106501  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:29.606699  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:30.106988  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:30.606751  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:31.106573  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:31.607271  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:32.107154  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:32.606611  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:33.107242  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:33.607016  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:34.106535  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:34.606754  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:35.107301  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:35.607266  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:36.106318  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:36.606315  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:37.107176  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:37.607281  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:38.106732  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:38.607122  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:39.106818  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:39.606784  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:40.107197  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:40.606991  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:41.107011  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:41.606339  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:42.106963  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:42.606555  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:43.107219  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:43.607105  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:44.106424  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:44.607215  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:45.106602  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:45.607006  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:46.106815  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:46.607280  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:47.106629  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:47.606477  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:48.107415  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:48.607339  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:49.106605  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:49.606757  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:50.106615  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:50.606311  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:51.106589  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:51.606462  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:52.106410  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:52.606644  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:53.106820  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:53.606821  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:54.107031  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:54.607139  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:55.106783  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:55.606601  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:56.107299  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:56.606277  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:57.107229  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:57.606479  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:58.106431  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:58.607303  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:59.107050  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:59.607125  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:00.106731  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:00.606499  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:01.107084  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:01.606814  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:02.106487  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:02.607319  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:03.106362  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:03.606446  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:04.106944  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:04.606981  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:05.106694  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:05.607165  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:06.107147  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:06.607010  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:07.106545  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:07.606527  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:08.106534  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:08.606518  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:09.106332  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:09.607203  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:10.106316  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:10.607212  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:11.107324  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:11.606853  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:12.106689  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:12.607269  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:13.107123  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:13.607171  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:14.107276  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:14.607287  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:15.106491  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:15.606605  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:16.106363  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:16.607071  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:17.106663  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:17.607071  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:18.106932  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:18.607123  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:19.106860  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:19.606746  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:20.107336  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:20.107457  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:20.136348  167468 cri.go:89] found id: ""
	I1009 19:12:20.136367  167468 logs.go:282] 0 containers: []
	W1009 19:12:20.136387  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:20.136398  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:20.136460  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:20.164454  167468 cri.go:89] found id: ""
	I1009 19:12:20.164472  167468 logs.go:282] 0 containers: []
	W1009 19:12:20.164480  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:20.164495  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:20.164552  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:20.191751  167468 cri.go:89] found id: ""
	I1009 19:12:20.191768  167468 logs.go:282] 0 containers: []
	W1009 19:12:20.191775  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:20.191780  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:20.191832  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:20.220093  167468 cri.go:89] found id: ""
	I1009 19:12:20.220110  167468 logs.go:282] 0 containers: []
	W1009 19:12:20.220117  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:20.220122  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:20.220167  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:20.247873  167468 cri.go:89] found id: ""
	I1009 19:12:20.247891  167468 logs.go:282] 0 containers: []
	W1009 19:12:20.247898  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:20.247903  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:20.247956  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:20.276291  167468 cri.go:89] found id: ""
	I1009 19:12:20.276308  167468 logs.go:282] 0 containers: []
	W1009 19:12:20.276315  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:20.276320  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:20.276367  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:20.303968  167468 cri.go:89] found id: ""
	I1009 19:12:20.303987  167468 logs.go:282] 0 containers: []
	W1009 19:12:20.303997  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:20.304008  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:20.304021  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:20.364492  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:20.356948    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:20.357523    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:20.359155    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:20.359653    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:20.360925    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:20.356948    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:20.357523    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:20.359155    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:20.359653    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:20.360925    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:20.364503  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:20.364517  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:20.425746  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:20.425770  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:20.456006  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:20.456025  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:20.527929  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:20.527953  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:23.042459  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:23.053621  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:23.053687  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:23.081180  167468 cri.go:89] found id: ""
	I1009 19:12:23.081199  167468 logs.go:282] 0 containers: []
	W1009 19:12:23.081209  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:23.081217  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:23.081270  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:23.110039  167468 cri.go:89] found id: ""
	I1009 19:12:23.110059  167468 logs.go:282] 0 containers: []
	W1009 19:12:23.110068  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:23.110076  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:23.110137  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:23.138162  167468 cri.go:89] found id: ""
	I1009 19:12:23.138179  167468 logs.go:282] 0 containers: []
	W1009 19:12:23.138185  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:23.138190  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:23.138239  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:23.164707  167468 cri.go:89] found id: ""
	I1009 19:12:23.164724  167468 logs.go:282] 0 containers: []
	W1009 19:12:23.164731  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:23.164736  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:23.164789  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:23.192945  167468 cri.go:89] found id: ""
	I1009 19:12:23.192961  167468 logs.go:282] 0 containers: []
	W1009 19:12:23.192968  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:23.192973  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:23.193032  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:23.220315  167468 cri.go:89] found id: ""
	I1009 19:12:23.220332  167468 logs.go:282] 0 containers: []
	W1009 19:12:23.220339  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:23.220344  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:23.220426  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:23.247691  167468 cri.go:89] found id: ""
	I1009 19:12:23.247708  167468 logs.go:282] 0 containers: []
	W1009 19:12:23.247716  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:23.247727  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:23.247740  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:23.312625  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:23.312649  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:23.345619  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:23.345635  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:23.414184  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:23.414206  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:23.426948  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:23.426967  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:23.487448  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:23.479417    6879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:23.480019    6879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:23.481685    6879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:23.482253    6879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:23.483804    6879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:23.479417    6879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:23.480019    6879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:23.481685    6879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:23.482253    6879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:23.483804    6879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:25.989194  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:26.000187  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:26.000258  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:26.026910  167468 cri.go:89] found id: ""
	I1009 19:12:26.026929  167468 logs.go:282] 0 containers: []
	W1009 19:12:26.026936  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:26.026942  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:26.026993  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:26.054273  167468 cri.go:89] found id: ""
	I1009 19:12:26.054290  167468 logs.go:282] 0 containers: []
	W1009 19:12:26.054296  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:26.054303  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:26.054347  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:26.082937  167468 cri.go:89] found id: ""
	I1009 19:12:26.082953  167468 logs.go:282] 0 containers: []
	W1009 19:12:26.082960  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:26.082965  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:26.083013  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:26.111657  167468 cri.go:89] found id: ""
	I1009 19:12:26.111674  167468 logs.go:282] 0 containers: []
	W1009 19:12:26.111681  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:26.111686  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:26.111744  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:26.138168  167468 cri.go:89] found id: ""
	I1009 19:12:26.138183  167468 logs.go:282] 0 containers: []
	W1009 19:12:26.138190  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:26.138212  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:26.138261  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:26.165234  167468 cri.go:89] found id: ""
	I1009 19:12:26.165258  167468 logs.go:282] 0 containers: []
	W1009 19:12:26.165267  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:26.165274  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:26.165340  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:26.193467  167468 cri.go:89] found id: ""
	I1009 19:12:26.193486  167468 logs.go:282] 0 containers: []
	W1009 19:12:26.193493  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:26.193503  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:26.193520  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:26.252945  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:26.245540    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:26.246126    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:26.247768    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:26.248210    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:26.249337    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:26.245540    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:26.246126    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:26.247768    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:26.248210    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:26.249337    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:26.252967  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:26.252981  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:26.318494  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:26.318518  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:26.349406  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:26.349428  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:26.417386  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:26.417411  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:28.930653  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:28.942481  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:28.942531  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:28.970321  167468 cri.go:89] found id: ""
	I1009 19:12:28.970338  167468 logs.go:282] 0 containers: []
	W1009 19:12:28.970344  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:28.970349  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:28.970413  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:28.996510  167468 cri.go:89] found id: ""
	I1009 19:12:28.996530  167468 logs.go:282] 0 containers: []
	W1009 19:12:28.996539  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:28.996545  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:28.996600  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:29.023259  167468 cri.go:89] found id: ""
	I1009 19:12:29.023277  167468 logs.go:282] 0 containers: []
	W1009 19:12:29.023285  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:29.023292  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:29.023344  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:29.050560  167468 cri.go:89] found id: ""
	I1009 19:12:29.050575  167468 logs.go:282] 0 containers: []
	W1009 19:12:29.050581  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:29.050585  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:29.050640  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:29.078006  167468 cri.go:89] found id: ""
	I1009 19:12:29.078024  167468 logs.go:282] 0 containers: []
	W1009 19:12:29.078031  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:29.078036  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:29.078091  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:29.105506  167468 cri.go:89] found id: ""
	I1009 19:12:29.105523  167468 logs.go:282] 0 containers: []
	W1009 19:12:29.105536  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:29.105541  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:29.105588  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:29.133781  167468 cri.go:89] found id: ""
	I1009 19:12:29.133798  167468 logs.go:282] 0 containers: []
	W1009 19:12:29.133804  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:29.133814  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:29.133828  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:29.164882  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:29.164903  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:29.231999  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:29.232023  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:29.244260  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:29.244278  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:29.302021  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:29.294502    7126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:29.295049    7126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:29.296660    7126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:29.297108    7126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:29.298657    7126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:29.294502    7126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:29.295049    7126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:29.296660    7126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:29.297108    7126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:29.298657    7126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:29.302038  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:29.302057  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:31.867896  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:31.879240  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:31.879294  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:31.905896  167468 cri.go:89] found id: ""
	I1009 19:12:31.905931  167468 logs.go:282] 0 containers: []
	W1009 19:12:31.905941  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:31.905947  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:31.906003  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:31.933637  167468 cri.go:89] found id: ""
	I1009 19:12:31.933653  167468 logs.go:282] 0 containers: []
	W1009 19:12:31.933660  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:31.933670  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:31.933724  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:31.961480  167468 cri.go:89] found id: ""
	I1009 19:12:31.961497  167468 logs.go:282] 0 containers: []
	W1009 19:12:31.961504  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:31.961509  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:31.961566  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:31.988032  167468 cri.go:89] found id: ""
	I1009 19:12:31.988049  167468 logs.go:282] 0 containers: []
	W1009 19:12:31.988056  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:31.988062  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:31.988112  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:32.015108  167468 cri.go:89] found id: ""
	I1009 19:12:32.015124  167468 logs.go:282] 0 containers: []
	W1009 19:12:32.015131  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:32.015136  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:32.015184  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:32.041897  167468 cri.go:89] found id: ""
	I1009 19:12:32.041922  167468 logs.go:282] 0 containers: []
	W1009 19:12:32.041929  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:32.041934  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:32.041979  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:32.068763  167468 cri.go:89] found id: ""
	I1009 19:12:32.068780  167468 logs.go:282] 0 containers: []
	W1009 19:12:32.068788  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:32.068797  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:32.068808  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:32.139869  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:32.139894  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:32.152815  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:32.152832  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:32.210942  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:32.203063    7237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:32.203597    7237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:32.205243    7237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:32.205744    7237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:32.207268    7237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:32.203063    7237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:32.203597    7237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:32.205243    7237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:32.205744    7237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:32.207268    7237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:32.210963  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:32.210977  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:32.276761  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:32.276783  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:34.810074  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:34.821837  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:34.821902  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:34.849063  167468 cri.go:89] found id: ""
	I1009 19:12:34.849080  167468 logs.go:282] 0 containers: []
	W1009 19:12:34.849089  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:34.849099  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:34.849166  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:34.877410  167468 cri.go:89] found id: ""
	I1009 19:12:34.877428  167468 logs.go:282] 0 containers: []
	W1009 19:12:34.877437  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:34.877443  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:34.877522  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:34.906363  167468 cri.go:89] found id: ""
	I1009 19:12:34.906395  167468 logs.go:282] 0 containers: []
	W1009 19:12:34.906410  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:34.906417  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:34.906466  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:34.935845  167468 cri.go:89] found id: ""
	I1009 19:12:34.935864  167468 logs.go:282] 0 containers: []
	W1009 19:12:34.935872  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:34.935877  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:34.935931  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:34.963735  167468 cri.go:89] found id: ""
	I1009 19:12:34.963755  167468 logs.go:282] 0 containers: []
	W1009 19:12:34.963765  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:34.963771  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:34.963827  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:34.991843  167468 cri.go:89] found id: ""
	I1009 19:12:34.991858  167468 logs.go:282] 0 containers: []
	W1009 19:12:34.991864  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:34.991869  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:34.991916  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:35.018519  167468 cri.go:89] found id: ""
	I1009 19:12:35.018536  167468 logs.go:282] 0 containers: []
	W1009 19:12:35.018544  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:35.018555  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:35.018567  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:35.047474  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:35.047494  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:35.115632  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:35.115655  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:35.128101  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:35.128120  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:35.188265  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:35.180353    7377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:35.181068    7377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:35.182692    7377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:35.183163    7377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:35.184740    7377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:35.180353    7377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:35.181068    7377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:35.182692    7377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:35.183163    7377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:35.184740    7377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:35.188276  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:35.188286  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:37.755993  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:37.767167  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:37.767221  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:37.794066  167468 cri.go:89] found id: ""
	I1009 19:12:37.794082  167468 logs.go:282] 0 containers: []
	W1009 19:12:37.794089  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:37.794095  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:37.794146  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:37.822922  167468 cri.go:89] found id: ""
	I1009 19:12:37.822938  167468 logs.go:282] 0 containers: []
	W1009 19:12:37.822944  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:37.822949  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:37.823009  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:37.850138  167468 cri.go:89] found id: ""
	I1009 19:12:37.850157  167468 logs.go:282] 0 containers: []
	W1009 19:12:37.850164  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:37.850170  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:37.850221  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:37.878740  167468 cri.go:89] found id: ""
	I1009 19:12:37.878767  167468 logs.go:282] 0 containers: []
	W1009 19:12:37.878774  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:37.878779  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:37.878831  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:37.906691  167468 cri.go:89] found id: ""
	I1009 19:12:37.906709  167468 logs.go:282] 0 containers: []
	W1009 19:12:37.906719  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:37.906725  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:37.906787  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:37.935304  167468 cri.go:89] found id: ""
	I1009 19:12:37.935423  167468 logs.go:282] 0 containers: []
	W1009 19:12:37.935437  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:37.935446  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:37.935516  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:37.962029  167468 cri.go:89] found id: ""
	I1009 19:12:37.962050  167468 logs.go:282] 0 containers: []
	W1009 19:12:37.962060  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:37.962070  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:37.962085  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:38.021180  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:38.013500    7482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:38.014003    7482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:38.015677    7482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:38.016220    7482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:38.017804    7482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:38.013500    7482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:38.014003    7482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:38.015677    7482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:38.016220    7482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:38.017804    7482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:38.021190  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:38.021201  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:38.087907  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:38.087937  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:38.121749  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:38.121769  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:38.190423  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:38.190452  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:40.704051  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:40.715312  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:40.715363  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:40.742832  167468 cri.go:89] found id: ""
	I1009 19:12:40.742849  167468 logs.go:282] 0 containers: []
	W1009 19:12:40.742858  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:40.742864  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:40.742936  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:40.769708  167468 cri.go:89] found id: ""
	I1009 19:12:40.769729  167468 logs.go:282] 0 containers: []
	W1009 19:12:40.769740  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:40.769746  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:40.769803  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:40.796560  167468 cri.go:89] found id: ""
	I1009 19:12:40.796579  167468 logs.go:282] 0 containers: []
	W1009 19:12:40.796589  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:40.796595  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:40.796660  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:40.823161  167468 cri.go:89] found id: ""
	I1009 19:12:40.823182  167468 logs.go:282] 0 containers: []
	W1009 19:12:40.823189  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:40.823197  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:40.823268  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:40.851120  167468 cri.go:89] found id: ""
	I1009 19:12:40.851138  167468 logs.go:282] 0 containers: []
	W1009 19:12:40.851144  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:40.851149  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:40.851197  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:40.876852  167468 cri.go:89] found id: ""
	I1009 19:12:40.876867  167468 logs.go:282] 0 containers: []
	W1009 19:12:40.876873  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:40.876879  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:40.876927  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:40.904162  167468 cri.go:89] found id: ""
	I1009 19:12:40.904177  167468 logs.go:282] 0 containers: []
	W1009 19:12:40.904184  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:40.904193  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:40.904210  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:40.962776  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:40.955114    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:40.955608    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:40.957139    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:40.957571    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:40.959161    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:40.955114    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:40.955608    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:40.957139    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:40.957571    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:40.959161    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:40.962793  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:40.962807  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:41.024362  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:41.024397  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:41.054697  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:41.054715  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:41.129584  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:41.129608  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:43.644081  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:43.655800  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:43.655864  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:43.685781  167468 cri.go:89] found id: ""
	I1009 19:12:43.685798  167468 logs.go:282] 0 containers: []
	W1009 19:12:43.685805  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:43.685811  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:43.685857  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:43.713359  167468 cri.go:89] found id: ""
	I1009 19:12:43.713375  167468 logs.go:282] 0 containers: []
	W1009 19:12:43.713396  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:43.713402  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:43.713451  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:43.740718  167468 cri.go:89] found id: ""
	I1009 19:12:43.740736  167468 logs.go:282] 0 containers: []
	W1009 19:12:43.740743  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:43.740750  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:43.740798  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:43.769427  167468 cri.go:89] found id: ""
	I1009 19:12:43.769443  167468 logs.go:282] 0 containers: []
	W1009 19:12:43.769450  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:43.769455  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:43.769517  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:43.797878  167468 cri.go:89] found id: ""
	I1009 19:12:43.797899  167468 logs.go:282] 0 containers: []
	W1009 19:12:43.797907  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:43.797912  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:43.797968  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:43.825547  167468 cri.go:89] found id: ""
	I1009 19:12:43.825564  167468 logs.go:282] 0 containers: []
	W1009 19:12:43.825570  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:43.825576  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:43.825625  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:43.854019  167468 cri.go:89] found id: ""
	I1009 19:12:43.854039  167468 logs.go:282] 0 containers: []
	W1009 19:12:43.854049  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:43.854060  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:43.854074  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:43.884227  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:43.884245  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:43.951690  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:43.951714  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:43.963786  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:43.963804  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:44.021147  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:44.013190    7751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:44.013778    7751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:44.015326    7751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:44.015859    7751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:44.017425    7751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:44.013190    7751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:44.013778    7751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:44.015326    7751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:44.015859    7751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:44.017425    7751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:44.021159  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:44.021171  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:46.585684  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:46.596993  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:46.597044  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:46.623772  167468 cri.go:89] found id: ""
	I1009 19:12:46.623793  167468 logs.go:282] 0 containers: []
	W1009 19:12:46.623800  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:46.623806  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:46.623856  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:46.652707  167468 cri.go:89] found id: ""
	I1009 19:12:46.652724  167468 logs.go:282] 0 containers: []
	W1009 19:12:46.652730  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:46.652736  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:46.652804  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:46.680752  167468 cri.go:89] found id: ""
	I1009 19:12:46.680770  167468 logs.go:282] 0 containers: []
	W1009 19:12:46.680780  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:46.680786  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:46.680849  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:46.708720  167468 cri.go:89] found id: ""
	I1009 19:12:46.708737  167468 logs.go:282] 0 containers: []
	W1009 19:12:46.708744  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:46.708750  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:46.708798  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:46.734857  167468 cri.go:89] found id: ""
	I1009 19:12:46.734873  167468 logs.go:282] 0 containers: []
	W1009 19:12:46.734880  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:46.734885  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:46.734930  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:46.762094  167468 cri.go:89] found id: ""
	I1009 19:12:46.762113  167468 logs.go:282] 0 containers: []
	W1009 19:12:46.762126  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:46.762133  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:46.762191  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:46.789680  167468 cri.go:89] found id: ""
	I1009 19:12:46.789700  167468 logs.go:282] 0 containers: []
	W1009 19:12:46.789708  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:46.789717  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:46.789728  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:46.861689  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:46.861711  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:46.874752  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:46.874775  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:46.934669  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:46.926336    7868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:46.926983    7868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:46.929273    7868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:46.929845    7868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:46.931396    7868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:46.926336    7868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:46.926983    7868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:46.929273    7868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:46.929845    7868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:46.931396    7868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:46.934679  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:46.934688  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:46.995061  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:46.995084  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:49.527642  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:49.538773  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:49.538828  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:49.566556  167468 cri.go:89] found id: ""
	I1009 19:12:49.566573  167468 logs.go:282] 0 containers: []
	W1009 19:12:49.566579  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:49.566584  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:49.566631  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:49.594280  167468 cri.go:89] found id: ""
	I1009 19:12:49.594297  167468 logs.go:282] 0 containers: []
	W1009 19:12:49.594304  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:49.594308  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:49.594360  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:49.622099  167468 cri.go:89] found id: ""
	I1009 19:12:49.622115  167468 logs.go:282] 0 containers: []
	W1009 19:12:49.622122  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:49.622127  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:49.622173  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:49.648411  167468 cri.go:89] found id: ""
	I1009 19:12:49.648430  167468 logs.go:282] 0 containers: []
	W1009 19:12:49.648437  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:49.648442  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:49.648506  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:49.676244  167468 cri.go:89] found id: ""
	I1009 19:12:49.676260  167468 logs.go:282] 0 containers: []
	W1009 19:12:49.676266  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:49.676272  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:49.676320  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:49.703539  167468 cri.go:89] found id: ""
	I1009 19:12:49.703555  167468 logs.go:282] 0 containers: []
	W1009 19:12:49.703562  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:49.703567  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:49.703617  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:49.730477  167468 cri.go:89] found id: ""
	I1009 19:12:49.730492  167468 logs.go:282] 0 containers: []
	W1009 19:12:49.730498  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:49.730508  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:49.730525  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:49.760658  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:49.760676  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:49.829075  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:49.829099  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:49.841535  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:49.841555  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:49.901305  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:49.892835    7997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:49.893403    7997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:49.895008    7997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:49.895583    7997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:49.896553    7997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:49.892835    7997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:49.893403    7997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:49.895008    7997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:49.895583    7997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:49.896553    7997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:49.901316  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:49.901327  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:52.467860  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:52.478990  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:52.479046  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:52.507725  167468 cri.go:89] found id: ""
	I1009 19:12:52.507745  167468 logs.go:282] 0 containers: []
	W1009 19:12:52.507753  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:52.507759  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:52.507817  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:52.535190  167468 cri.go:89] found id: ""
	I1009 19:12:52.535210  167468 logs.go:282] 0 containers: []
	W1009 19:12:52.535219  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:52.535226  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:52.535277  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:52.562492  167468 cri.go:89] found id: ""
	I1009 19:12:52.562508  167468 logs.go:282] 0 containers: []
	W1009 19:12:52.562515  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:52.562520  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:52.562570  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:52.590535  167468 cri.go:89] found id: ""
	I1009 19:12:52.590556  167468 logs.go:282] 0 containers: []
	W1009 19:12:52.590563  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:52.590568  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:52.590619  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:52.617794  167468 cri.go:89] found id: ""
	I1009 19:12:52.617811  167468 logs.go:282] 0 containers: []
	W1009 19:12:52.617817  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:52.617822  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:52.617871  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:52.645640  167468 cri.go:89] found id: ""
	I1009 19:12:52.645657  167468 logs.go:282] 0 containers: []
	W1009 19:12:52.645663  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:52.645668  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:52.645725  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:52.673077  167468 cri.go:89] found id: ""
	I1009 19:12:52.673099  167468 logs.go:282] 0 containers: []
	W1009 19:12:52.673109  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:52.673121  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:52.673134  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:52.685322  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:52.685343  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:52.744140  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:52.736205    8111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:52.736792    8111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:52.738405    8111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:52.738829    8111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:52.740529    8111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:52.736205    8111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:52.736792    8111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:52.738405    8111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:52.738829    8111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:52.740529    8111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:52.744151  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:52.744161  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:52.804313  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:52.804337  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:52.835400  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:52.835423  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:55.406701  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:55.418704  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:55.418764  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:55.446462  167468 cri.go:89] found id: ""
	I1009 19:12:55.446482  167468 logs.go:282] 0 containers: []
	W1009 19:12:55.446500  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:55.446507  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:55.446565  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:55.474996  167468 cri.go:89] found id: ""
	I1009 19:12:55.475012  167468 logs.go:282] 0 containers: []
	W1009 19:12:55.475021  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:55.475026  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:55.475071  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:55.501499  167468 cri.go:89] found id: ""
	I1009 19:12:55.501517  167468 logs.go:282] 0 containers: []
	W1009 19:12:55.501538  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:55.501548  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:55.501615  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:55.529250  167468 cri.go:89] found id: ""
	I1009 19:12:55.529266  167468 logs.go:282] 0 containers: []
	W1009 19:12:55.529273  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:55.529278  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:55.529331  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:55.557673  167468 cri.go:89] found id: ""
	I1009 19:12:55.557697  167468 logs.go:282] 0 containers: []
	W1009 19:12:55.557705  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:55.557711  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:55.557782  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:55.584821  167468 cri.go:89] found id: ""
	I1009 19:12:55.584837  167468 logs.go:282] 0 containers: []
	W1009 19:12:55.584844  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:55.584848  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:55.584896  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:55.610337  167468 cri.go:89] found id: ""
	I1009 19:12:55.610353  167468 logs.go:282] 0 containers: []
	W1009 19:12:55.610359  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:55.610367  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:55.610394  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:55.640837  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:55.640856  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:55.707303  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:55.707327  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:55.719504  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:55.719524  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:55.777237  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:55.769773    8246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:55.770229    8246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:55.771763    8246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:55.772256    8246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:55.773793    8246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:55.769773    8246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:55.770229    8246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:55.771763    8246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:55.772256    8246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:55.773793    8246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:55.777249  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:55.777260  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:58.340087  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:58.351165  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:58.351219  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:58.378091  167468 cri.go:89] found id: ""
	I1009 19:12:58.378108  167468 logs.go:282] 0 containers: []
	W1009 19:12:58.378114  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:58.378119  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:58.378169  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:58.407571  167468 cri.go:89] found id: ""
	I1009 19:12:58.407589  167468 logs.go:282] 0 containers: []
	W1009 19:12:58.407598  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:58.407604  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:58.407653  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:58.436553  167468 cri.go:89] found id: ""
	I1009 19:12:58.436571  167468 logs.go:282] 0 containers: []
	W1009 19:12:58.436580  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:58.436586  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:58.436649  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:58.463773  167468 cri.go:89] found id: ""
	I1009 19:12:58.463789  167468 logs.go:282] 0 containers: []
	W1009 19:12:58.463795  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:58.463799  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:58.463859  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:58.490461  167468 cri.go:89] found id: ""
	I1009 19:12:58.490477  167468 logs.go:282] 0 containers: []
	W1009 19:12:58.490484  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:58.490488  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:58.490536  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:58.517574  167468 cri.go:89] found id: ""
	I1009 19:12:58.517591  167468 logs.go:282] 0 containers: []
	W1009 19:12:58.517598  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:58.517604  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:58.517653  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:58.544333  167468 cri.go:89] found id: ""
	I1009 19:12:58.544351  167468 logs.go:282] 0 containers: []
	W1009 19:12:58.544361  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:58.544371  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:58.544398  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:58.602923  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:58.594853    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:58.595424    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:58.596985    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:58.597443    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:58.599067    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:58.594853    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:58.595424    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:58.596985    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:58.597443    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:58.599067    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:58.602934  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:58.602949  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:58.666550  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:58.666572  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:58.696671  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:58.696690  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:58.763866  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:58.763888  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:01.277960  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:01.288975  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:01.289031  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:01.315640  167468 cri.go:89] found id: ""
	I1009 19:13:01.315656  167468 logs.go:282] 0 containers: []
	W1009 19:13:01.315694  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:01.315702  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:01.315763  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:01.344136  167468 cri.go:89] found id: ""
	I1009 19:13:01.344152  167468 logs.go:282] 0 containers: []
	W1009 19:13:01.344159  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:01.344164  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:01.344217  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:01.372892  167468 cri.go:89] found id: ""
	I1009 19:13:01.372907  167468 logs.go:282] 0 containers: []
	W1009 19:13:01.372914  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:01.372919  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:01.372973  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:01.399606  167468 cri.go:89] found id: ""
	I1009 19:13:01.399626  167468 logs.go:282] 0 containers: []
	W1009 19:13:01.399636  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:01.399643  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:01.399697  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:01.427550  167468 cri.go:89] found id: ""
	I1009 19:13:01.427570  167468 logs.go:282] 0 containers: []
	W1009 19:13:01.427581  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:01.427592  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:01.427647  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:01.454668  167468 cri.go:89] found id: ""
	I1009 19:13:01.454686  167468 logs.go:282] 0 containers: []
	W1009 19:13:01.454693  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:01.454698  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:01.454750  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:01.481897  167468 cri.go:89] found id: ""
	I1009 19:13:01.481916  167468 logs.go:282] 0 containers: []
	W1009 19:13:01.481926  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:01.481939  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:01.481955  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:01.555443  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:01.555466  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:01.567729  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:01.567749  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:01.627530  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:01.618960    8477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:01.620263    8477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:01.620839    8477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:01.622496    8477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:01.623021    8477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:01.618960    8477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:01.620263    8477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:01.620839    8477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:01.622496    8477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:01.623021    8477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:01.627544  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:01.627559  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:01.688247  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:01.688274  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:04.220134  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:04.231353  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:04.231446  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:04.258512  167468 cri.go:89] found id: ""
	I1009 19:13:04.258528  167468 logs.go:282] 0 containers: []
	W1009 19:13:04.258534  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:04.258539  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:04.258586  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:04.285536  167468 cri.go:89] found id: ""
	I1009 19:13:04.285552  167468 logs.go:282] 0 containers: []
	W1009 19:13:04.285558  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:04.285564  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:04.285612  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:04.314877  167468 cri.go:89] found id: ""
	I1009 19:13:04.314902  167468 logs.go:282] 0 containers: []
	W1009 19:13:04.314909  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:04.314914  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:04.314968  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:04.342074  167468 cri.go:89] found id: ""
	I1009 19:13:04.342091  167468 logs.go:282] 0 containers: []
	W1009 19:13:04.342101  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:04.342108  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:04.342168  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:04.367935  167468 cri.go:89] found id: ""
	I1009 19:13:04.367951  167468 logs.go:282] 0 containers: []
	W1009 19:13:04.367959  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:04.367964  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:04.368012  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:04.394817  167468 cri.go:89] found id: ""
	I1009 19:13:04.394837  167468 logs.go:282] 0 containers: []
	W1009 19:13:04.394846  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:04.394854  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:04.394919  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:04.421650  167468 cri.go:89] found id: ""
	I1009 19:13:04.421670  167468 logs.go:282] 0 containers: []
	W1009 19:13:04.421680  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:04.421691  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:04.421712  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:04.490071  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:04.490097  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:04.502160  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:04.502179  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:04.561004  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:04.553527    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:04.554086    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:04.555768    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:04.556209    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:04.557463    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:04.553527    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:04.554086    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:04.555768    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:04.556209    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:04.557463    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:04.561015  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:04.561026  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:04.627255  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:04.627292  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:07.159560  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:07.170893  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:07.170944  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:07.198061  167468 cri.go:89] found id: ""
	I1009 19:13:07.198081  167468 logs.go:282] 0 containers: []
	W1009 19:13:07.198088  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:07.198094  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:07.198144  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:07.226131  167468 cri.go:89] found id: ""
	I1009 19:13:07.226150  167468 logs.go:282] 0 containers: []
	W1009 19:13:07.226157  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:07.226162  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:07.226220  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:07.254150  167468 cri.go:89] found id: ""
	I1009 19:13:07.254171  167468 logs.go:282] 0 containers: []
	W1009 19:13:07.254181  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:07.254188  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:07.254244  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:07.281984  167468 cri.go:89] found id: ""
	I1009 19:13:07.282004  167468 logs.go:282] 0 containers: []
	W1009 19:13:07.282015  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:07.282023  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:07.282087  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:07.309721  167468 cri.go:89] found id: ""
	I1009 19:13:07.309741  167468 logs.go:282] 0 containers: []
	W1009 19:13:07.309747  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:07.309752  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:07.309807  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:07.336611  167468 cri.go:89] found id: ""
	I1009 19:13:07.336629  167468 logs.go:282] 0 containers: []
	W1009 19:13:07.336636  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:07.336641  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:07.336698  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:07.363039  167468 cri.go:89] found id: ""
	I1009 19:13:07.363059  167468 logs.go:282] 0 containers: []
	W1009 19:13:07.363065  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:07.363074  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:07.363084  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:07.433229  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:07.433254  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:07.445762  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:07.445782  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:07.506602  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:07.497036    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:07.497750    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:07.499446    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:07.501191    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:07.501817    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:07.497036    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:07.497750    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:07.499446    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:07.501191    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:07.501817    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:07.506621  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:07.506637  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:07.570528  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:07.570555  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:10.103498  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:10.114559  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:10.114618  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:10.140877  167468 cri.go:89] found id: ""
	I1009 19:13:10.140904  167468 logs.go:282] 0 containers: []
	W1009 19:13:10.140915  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:10.140921  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:10.140976  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:10.167893  167468 cri.go:89] found id: ""
	I1009 19:13:10.167928  167468 logs.go:282] 0 containers: []
	W1009 19:13:10.167938  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:10.167945  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:10.168001  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:10.195691  167468 cri.go:89] found id: ""
	I1009 19:13:10.195708  167468 logs.go:282] 0 containers: []
	W1009 19:13:10.195737  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:10.195744  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:10.195806  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:10.222647  167468 cri.go:89] found id: ""
	I1009 19:13:10.222665  167468 logs.go:282] 0 containers: []
	W1009 19:13:10.222671  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:10.222677  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:10.222729  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:10.249706  167468 cri.go:89] found id: ""
	I1009 19:13:10.249725  167468 logs.go:282] 0 containers: []
	W1009 19:13:10.249735  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:10.249741  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:10.249805  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:10.277282  167468 cri.go:89] found id: ""
	I1009 19:13:10.277302  167468 logs.go:282] 0 containers: []
	W1009 19:13:10.277311  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:10.277317  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:10.277395  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:10.305128  167468 cri.go:89] found id: ""
	I1009 19:13:10.305144  167468 logs.go:282] 0 containers: []
	W1009 19:13:10.305151  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:10.305159  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:10.305171  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:10.366874  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:10.359143    8834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:10.359783    8834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:10.361001    8834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:10.361659    8834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:10.363247    8834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:10.359143    8834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:10.359783    8834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:10.361001    8834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:10.361659    8834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:10.363247    8834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:10.366887  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:10.366899  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:10.431608  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:10.431633  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:10.463358  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:10.463402  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:10.531897  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:10.531921  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:13.047007  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:13.058221  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:13.058285  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:13.086231  167468 cri.go:89] found id: ""
	I1009 19:13:13.086259  167468 logs.go:282] 0 containers: []
	W1009 19:13:13.086266  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:13.086272  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:13.086326  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:13.111982  167468 cri.go:89] found id: ""
	I1009 19:13:13.111999  167468 logs.go:282] 0 containers: []
	W1009 19:13:13.112006  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:13.112011  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:13.112068  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:13.138979  167468 cri.go:89] found id: ""
	I1009 19:13:13.139004  167468 logs.go:282] 0 containers: []
	W1009 19:13:13.139011  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:13.139016  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:13.139067  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:13.167881  167468 cri.go:89] found id: ""
	I1009 19:13:13.167902  167468 logs.go:282] 0 containers: []
	W1009 19:13:13.167913  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:13.167920  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:13.167974  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:13.197025  167468 cri.go:89] found id: ""
	I1009 19:13:13.197040  167468 logs.go:282] 0 containers: []
	W1009 19:13:13.197047  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:13.197052  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:13.197110  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:13.224797  167468 cri.go:89] found id: ""
	I1009 19:13:13.224813  167468 logs.go:282] 0 containers: []
	W1009 19:13:13.224819  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:13.224824  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:13.224868  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:13.251310  167468 cri.go:89] found id: ""
	I1009 19:13:13.251329  167468 logs.go:282] 0 containers: []
	W1009 19:13:13.251339  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:13.251351  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:13.251370  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:13.263868  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:13.263890  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:13.322120  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:13.314752    8962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:13.315273    8962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:13.316869    8962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:13.317321    8962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:13.318642    8962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:13.314752    8962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:13.315273    8962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:13.316869    8962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:13.317321    8962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:13.318642    8962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:13.322130  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:13.322141  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:13.386957  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:13.386982  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:13.419121  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:13.419142  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
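The block above is one iteration of minikube's wait-and-collect loop: check for a kube-apiserver process, list every control-plane container type with crictl, and, when nothing is found, gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before trying again a few seconds later. As a rough shell illustration only, and not the actual Go code behind logs.go and cri.go, the repeated cycle amounts to something like this (the six-minute deadline is illustrative, not taken from the log):

# illustrative sketch of the polling pattern visible in the log; not minikube's implementation
deadline=$((SECONDS + 360))
while (( SECONDS < deadline )); do
  if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
    echo 'apiserver process found'
    break
  fi
  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
    sudo crictl ps -a --quiet --name="$c"    # each of these returns nothing in the log above
  done
  sleep 3
done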
	I1009 19:13:15.986307  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:15.997455  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:15.997514  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:16.023786  167468 cri.go:89] found id: ""
	I1009 19:13:16.023803  167468 logs.go:282] 0 containers: []
	W1009 19:13:16.023810  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:16.023815  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:16.023862  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:16.051180  167468 cri.go:89] found id: ""
	I1009 19:13:16.051201  167468 logs.go:282] 0 containers: []
	W1009 19:13:16.051211  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:16.051218  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:16.051269  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:16.078469  167468 cri.go:89] found id: ""
	I1009 19:13:16.078489  167468 logs.go:282] 0 containers: []
	W1009 19:13:16.078501  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:16.078507  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:16.078570  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:16.106922  167468 cri.go:89] found id: ""
	I1009 19:13:16.106942  167468 logs.go:282] 0 containers: []
	W1009 19:13:16.106949  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:16.106953  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:16.107015  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:16.134957  167468 cri.go:89] found id: ""
	I1009 19:13:16.134974  167468 logs.go:282] 0 containers: []
	W1009 19:13:16.134985  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:16.134990  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:16.135038  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:16.162970  167468 cri.go:89] found id: ""
	I1009 19:13:16.162986  167468 logs.go:282] 0 containers: []
	W1009 19:13:16.162992  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:16.162997  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:16.163062  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:16.190741  167468 cri.go:89] found id: ""
	I1009 19:13:16.190759  167468 logs.go:282] 0 containers: []
	W1009 19:13:16.190773  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:16.190782  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:16.190793  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:16.256749  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:16.256775  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:16.268841  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:16.268862  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:16.328040  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:16.319195    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:16.319979    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:16.321948    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:16.322864    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:16.323494    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:16.319195    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:16.319979    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:16.321948    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:16.322864    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:16.323494    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:16.328057  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:16.328070  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:16.391596  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:16.391621  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:18.923965  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:18.935342  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:18.935407  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:18.963928  167468 cri.go:89] found id: ""
	I1009 19:13:18.963948  167468 logs.go:282] 0 containers: []
	W1009 19:13:18.963954  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:18.963959  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:18.964008  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:18.991109  167468 cri.go:89] found id: ""
	I1009 19:13:18.991125  167468 logs.go:282] 0 containers: []
	W1009 19:13:18.991131  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:18.991136  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:18.991183  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:19.018365  167468 cri.go:89] found id: ""
	I1009 19:13:19.018402  167468 logs.go:282] 0 containers: []
	W1009 19:13:19.018412  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:19.018418  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:19.018469  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:19.045613  167468 cri.go:89] found id: ""
	I1009 19:13:19.045629  167468 logs.go:282] 0 containers: []
	W1009 19:13:19.045638  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:19.045645  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:19.045705  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:19.073406  167468 cri.go:89] found id: ""
	I1009 19:13:19.073425  167468 logs.go:282] 0 containers: []
	W1009 19:13:19.073432  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:19.073437  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:19.073492  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:19.100393  167468 cri.go:89] found id: ""
	I1009 19:13:19.100412  167468 logs.go:282] 0 containers: []
	W1009 19:13:19.100418  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:19.100423  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:19.100471  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:19.126851  167468 cri.go:89] found id: ""
	I1009 19:13:19.126867  167468 logs.go:282] 0 containers: []
	W1009 19:13:19.126873  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:19.126880  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:19.126892  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:19.187263  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:19.179205    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:19.180148    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:19.181817    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:19.182282    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:19.183463    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:19.179205    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:19.180148    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:19.181817    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:19.182282    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:19.183463    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:19.187275  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:19.187287  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:19.249235  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:19.249260  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:19.280761  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:19.280782  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:19.348861  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:19.348882  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
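From the host, the same diagnostics can usually be collected in one pass with minikube's own tooling instead of repeating the per-command ssh calls; the profile name is deliberately left as an input below, because this excerpt does not show which profile the failing test created.

# PROFILE must be supplied by the reader; the real profile name is not visible in this excerpt
PROFILE="$1"
minikube -p "$PROFILE" logs --file=./minikube-logs.txt    # full log bundle written to a file
minikube -p "$PROFILE" ssh -- sudo journalctl -u kubelet -n 200 --no-pager
minikube -p "$PROFILE" ssh -- sudo crictl ps -a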
	I1009 19:13:21.863867  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:21.875320  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:21.875402  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:21.901142  167468 cri.go:89] found id: ""
	I1009 19:13:21.901162  167468 logs.go:282] 0 containers: []
	W1009 19:13:21.901172  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:21.901179  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:21.901245  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:21.928133  167468 cri.go:89] found id: ""
	I1009 19:13:21.928152  167468 logs.go:282] 0 containers: []
	W1009 19:13:21.928158  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:21.928164  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:21.928212  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:21.955553  167468 cri.go:89] found id: ""
	I1009 19:13:21.955569  167468 logs.go:282] 0 containers: []
	W1009 19:13:21.955576  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:21.955581  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:21.955629  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:21.983034  167468 cri.go:89] found id: ""
	I1009 19:13:21.983051  167468 logs.go:282] 0 containers: []
	W1009 19:13:21.983059  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:21.983066  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:21.983121  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:22.010710  167468 cri.go:89] found id: ""
	I1009 19:13:22.010728  167468 logs.go:282] 0 containers: []
	W1009 19:13:22.010736  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:22.010741  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:22.010806  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:22.036790  167468 cri.go:89] found id: ""
	I1009 19:13:22.036806  167468 logs.go:282] 0 containers: []
	W1009 19:13:22.036813  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:22.036818  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:22.036863  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:22.063811  167468 cri.go:89] found id: ""
	I1009 19:13:22.063829  167468 logs.go:282] 0 containers: []
	W1009 19:13:22.063835  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:22.063844  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:22.063853  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:22.130862  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:22.130888  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:22.143167  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:22.143188  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:22.204009  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:22.195809    9341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:22.196397    9341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:22.198003    9341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:22.198478    9341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:22.200063    9341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:22.195809    9341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:22.196397    9341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:22.198003    9341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:22.198478    9341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:22.200063    9341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:22.204024  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:22.204036  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:22.268771  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:22.268794  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:24.801350  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:24.812363  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:24.812431  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:24.840646  167468 cri.go:89] found id: ""
	I1009 19:13:24.840663  167468 logs.go:282] 0 containers: []
	W1009 19:13:24.840671  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:24.840677  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:24.840739  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:24.867359  167468 cri.go:89] found id: ""
	I1009 19:13:24.867392  167468 logs.go:282] 0 containers: []
	W1009 19:13:24.867402  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:24.867409  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:24.867470  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:24.894684  167468 cri.go:89] found id: ""
	I1009 19:13:24.894701  167468 logs.go:282] 0 containers: []
	W1009 19:13:24.894707  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:24.894712  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:24.894761  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:24.922658  167468 cri.go:89] found id: ""
	I1009 19:13:24.922678  167468 logs.go:282] 0 containers: []
	W1009 19:13:24.922688  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:24.922694  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:24.922751  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:24.949879  167468 cri.go:89] found id: ""
	I1009 19:13:24.949895  167468 logs.go:282] 0 containers: []
	W1009 19:13:24.949901  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:24.949906  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:24.949964  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:24.976423  167468 cri.go:89] found id: ""
	I1009 19:13:24.976441  167468 logs.go:282] 0 containers: []
	W1009 19:13:24.976450  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:24.976457  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:24.976512  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:25.002011  167468 cri.go:89] found id: ""
	I1009 19:13:25.002028  167468 logs.go:282] 0 containers: []
	W1009 19:13:25.002034  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:25.002042  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:25.002054  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:25.073024  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:25.073048  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:25.085208  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:25.085228  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:25.144068  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:25.136709    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:25.137237    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:25.138809    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:25.139304    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:25.140539    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:25.136709    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:25.137237    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:25.138809    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:25.139304    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:25.140539    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:25.144082  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:25.144098  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:25.208021  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:25.208044  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:27.740581  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:27.751702  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:27.751756  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:27.778066  167468 cri.go:89] found id: ""
	I1009 19:13:27.778082  167468 logs.go:282] 0 containers: []
	W1009 19:13:27.778088  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:27.778093  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:27.778139  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:27.806166  167468 cri.go:89] found id: ""
	I1009 19:13:27.806183  167468 logs.go:282] 0 containers: []
	W1009 19:13:27.806192  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:27.806198  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:27.806261  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:27.833747  167468 cri.go:89] found id: ""
	I1009 19:13:27.833783  167468 logs.go:282] 0 containers: []
	W1009 19:13:27.833793  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:27.833800  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:27.833859  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:27.861452  167468 cri.go:89] found id: ""
	I1009 19:13:27.861471  167468 logs.go:282] 0 containers: []
	W1009 19:13:27.861478  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:27.861482  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:27.861543  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:27.889001  167468 cri.go:89] found id: ""
	I1009 19:13:27.889017  167468 logs.go:282] 0 containers: []
	W1009 19:13:27.889023  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:27.889030  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:27.889090  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:27.915709  167468 cri.go:89] found id: ""
	I1009 19:13:27.915729  167468 logs.go:282] 0 containers: []
	W1009 19:13:27.915739  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:27.915746  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:27.915802  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:27.943121  167468 cri.go:89] found id: ""
	I1009 19:13:27.943140  167468 logs.go:282] 0 containers: []
	W1009 19:13:27.943146  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:27.943156  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:27.943167  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:28.010452  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:28.010475  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:28.022860  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:28.022878  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:28.080632  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:28.072836    9582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:28.073401    9582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:28.074954    9582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:28.075364    9582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:28.076931    9582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:28.072836    9582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:28.073401    9582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:28.074954    9582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:28.075364    9582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:28.076931    9582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:28.080645  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:28.080658  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:28.144679  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:28.144702  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
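Because the kubelet journal is being collected on every pass but no control-plane container ever appears, the usual next question inside the node is whether the static pod manifests exist at all and what the kubelet reports about them. A hedged sketch of that check, assuming the kubeadm default manifest path:

# kubeadm's default static pod directory; if these manifests are missing the kubelet has nothing to start
ls -l /etc/kubernetes/manifests/
# kubelet complaints about the apiserver static pod, if any (run inside the node)
sudo journalctl -u kubelet --no-pager -n 400 | grep -Ei 'apiserver|manifest|failed' | tail -n 40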
	I1009 19:13:30.676105  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:30.687597  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:30.687649  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:30.714683  167468 cri.go:89] found id: ""
	I1009 19:13:30.714700  167468 logs.go:282] 0 containers: []
	W1009 19:13:30.714707  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:30.714712  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:30.714776  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:30.742271  167468 cri.go:89] found id: ""
	I1009 19:13:30.742292  167468 logs.go:282] 0 containers: []
	W1009 19:13:30.742301  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:30.742308  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:30.742397  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:30.769357  167468 cri.go:89] found id: ""
	I1009 19:13:30.769388  167468 logs.go:282] 0 containers: []
	W1009 19:13:30.769397  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:30.769404  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:30.769463  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:30.795938  167468 cri.go:89] found id: ""
	I1009 19:13:30.795955  167468 logs.go:282] 0 containers: []
	W1009 19:13:30.795962  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:30.795968  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:30.796029  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:30.821704  167468 cri.go:89] found id: ""
	I1009 19:13:30.821726  167468 logs.go:282] 0 containers: []
	W1009 19:13:30.821736  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:30.821743  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:30.821813  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:30.848828  167468 cri.go:89] found id: ""
	I1009 19:13:30.848847  167468 logs.go:282] 0 containers: []
	W1009 19:13:30.848853  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:30.848859  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:30.848906  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:30.876298  167468 cri.go:89] found id: ""
	I1009 19:13:30.876318  167468 logs.go:282] 0 containers: []
	W1009 19:13:30.876328  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:30.876338  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:30.876357  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:30.947427  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:30.947451  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:30.959445  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:30.959462  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:31.017292  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:31.009627    9718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:31.010482    9718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:31.011538    9718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:31.012034    9718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:31.013579    9718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:31.009627    9718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:31.010482    9718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:31.011538    9718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:31.012034    9718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:31.013579    9718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:31.017303  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:31.017318  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:31.080462  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:31.080485  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:33.612293  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:33.623432  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:33.623482  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:33.650758  167468 cri.go:89] found id: ""
	I1009 19:13:33.650776  167468 logs.go:282] 0 containers: []
	W1009 19:13:33.650783  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:33.650789  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:33.650844  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:33.678965  167468 cri.go:89] found id: ""
	I1009 19:13:33.678981  167468 logs.go:282] 0 containers: []
	W1009 19:13:33.678988  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:33.678992  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:33.679068  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:33.709733  167468 cri.go:89] found id: ""
	I1009 19:13:33.709754  167468 logs.go:282] 0 containers: []
	W1009 19:13:33.709762  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:33.709769  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:33.709899  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:33.740843  167468 cri.go:89] found id: ""
	I1009 19:13:33.740860  167468 logs.go:282] 0 containers: []
	W1009 19:13:33.740867  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:33.740872  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:33.740923  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:33.768607  167468 cri.go:89] found id: ""
	I1009 19:13:33.768624  167468 logs.go:282] 0 containers: []
	W1009 19:13:33.768631  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:33.768636  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:33.768685  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:33.795766  167468 cri.go:89] found id: ""
	I1009 19:13:33.795783  167468 logs.go:282] 0 containers: []
	W1009 19:13:33.795790  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:33.795796  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:33.795851  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:33.824447  167468 cri.go:89] found id: ""
	I1009 19:13:33.824468  167468 logs.go:282] 0 containers: []
	W1009 19:13:33.824477  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:33.824489  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:33.824505  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:33.886369  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:33.878113    9834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:33.878720    9834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:33.880311    9834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:33.880950    9834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:33.882576    9834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:33.878113    9834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:33.878720    9834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:33.880311    9834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:33.880950    9834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:33.882576    9834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:33.886403  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:33.886419  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:33.948841  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:33.948874  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:33.980307  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:33.980330  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:34.048912  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:34.048944  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:36.564162  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:36.576125  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:36.576178  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:36.604219  167468 cri.go:89] found id: ""
	I1009 19:13:36.604235  167468 logs.go:282] 0 containers: []
	W1009 19:13:36.604242  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:36.604246  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:36.604297  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:36.631435  167468 cri.go:89] found id: ""
	I1009 19:13:36.631455  167468 logs.go:282] 0 containers: []
	W1009 19:13:36.631463  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:36.631468  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:36.631522  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:36.658905  167468 cri.go:89] found id: ""
	I1009 19:13:36.658925  167468 logs.go:282] 0 containers: []
	W1009 19:13:36.658932  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:36.658941  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:36.659003  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:36.687919  167468 cri.go:89] found id: ""
	I1009 19:13:36.687941  167468 logs.go:282] 0 containers: []
	W1009 19:13:36.687948  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:36.687963  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:36.688010  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:36.715354  167468 cri.go:89] found id: ""
	I1009 19:13:36.715372  167468 logs.go:282] 0 containers: []
	W1009 19:13:36.715398  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:36.715405  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:36.715466  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:36.743207  167468 cri.go:89] found id: ""
	I1009 19:13:36.743224  167468 logs.go:282] 0 containers: []
	W1009 19:13:36.743238  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:36.743243  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:36.743291  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:36.770612  167468 cri.go:89] found id: ""
	I1009 19:13:36.770629  167468 logs.go:282] 0 containers: []
	W1009 19:13:36.770636  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:36.770645  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:36.770656  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:36.836830  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:36.836856  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:36.849433  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:36.849452  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:36.908266  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:36.900497    9960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:36.901238    9960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:36.902808    9960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:36.903266    9960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:36.904594    9960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:36.900497    9960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:36.901238    9960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:36.902808    9960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:36.903266    9960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:36.904594    9960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:36.908283  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:36.908297  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:36.975244  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:36.975275  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:39.505862  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:39.516820  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:39.516888  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:39.543164  167468 cri.go:89] found id: ""
	I1009 19:13:39.543180  167468 logs.go:282] 0 containers: []
	W1009 19:13:39.543186  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:39.543191  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:39.543240  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:39.569192  167468 cri.go:89] found id: ""
	I1009 19:13:39.569212  167468 logs.go:282] 0 containers: []
	W1009 19:13:39.569221  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:39.569227  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:39.569287  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:39.596196  167468 cri.go:89] found id: ""
	I1009 19:13:39.596213  167468 logs.go:282] 0 containers: []
	W1009 19:13:39.596219  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:39.596224  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:39.596271  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:39.622067  167468 cri.go:89] found id: ""
	I1009 19:13:39.622087  167468 logs.go:282] 0 containers: []
	W1009 19:13:39.622093  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:39.622098  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:39.622152  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:39.649128  167468 cri.go:89] found id: ""
	I1009 19:13:39.649145  167468 logs.go:282] 0 containers: []
	W1009 19:13:39.649151  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:39.649156  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:39.649202  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:39.674991  167468 cri.go:89] found id: ""
	I1009 19:13:39.675010  167468 logs.go:282] 0 containers: []
	W1009 19:13:39.675020  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:39.675027  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:39.675129  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:39.702254  167468 cri.go:89] found id: ""
	I1009 19:13:39.702274  167468 logs.go:282] 0 containers: []
	W1009 19:13:39.702284  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:39.702295  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:39.702307  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:39.774369  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:39.774400  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:39.786946  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:39.786967  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:39.846655  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:39.839086   10086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:39.839592   10086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:39.841208   10086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:39.841703   10086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:39.843295   10086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:39.839086   10086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:39.839592   10086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:39.841208   10086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:39.841703   10086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:39.843295   10086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:39.846669  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:39.846682  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:39.910311  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:39.910334  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:42.443183  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:42.454133  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:42.454185  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:42.481698  167468 cri.go:89] found id: ""
	I1009 19:13:42.481718  167468 logs.go:282] 0 containers: []
	W1009 19:13:42.481727  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:42.481733  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:42.481786  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:42.508494  167468 cri.go:89] found id: ""
	I1009 19:13:42.508514  167468 logs.go:282] 0 containers: []
	W1009 19:13:42.508524  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:42.508531  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:42.508585  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:42.535987  167468 cri.go:89] found id: ""
	I1009 19:13:42.536004  167468 logs.go:282] 0 containers: []
	W1009 19:13:42.536025  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:42.536034  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:42.536096  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:42.563210  167468 cri.go:89] found id: ""
	I1009 19:13:42.563227  167468 logs.go:282] 0 containers: []
	W1009 19:13:42.563234  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:42.563239  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:42.563285  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:42.590575  167468 cri.go:89] found id: ""
	I1009 19:13:42.590592  167468 logs.go:282] 0 containers: []
	W1009 19:13:42.590598  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:42.590603  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:42.590649  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:42.616425  167468 cri.go:89] found id: ""
	I1009 19:13:42.616440  167468 logs.go:282] 0 containers: []
	W1009 19:13:42.616446  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:42.616451  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:42.616494  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:42.644221  167468 cri.go:89] found id: ""
	I1009 19:13:42.644239  167468 logs.go:282] 0 containers: []
	W1009 19:13:42.644248  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:42.644259  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:42.644272  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:42.712601  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:42.712623  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:42.724833  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:42.724851  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:42.782650  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:42.775609   10205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:42.776076   10205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:42.777677   10205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:42.778114   10205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:42.779450   10205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:42.775609   10205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:42.776076   10205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:42.777677   10205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:42.778114   10205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:42.779450   10205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:42.782664  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:42.782682  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:42.846741  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:42.846763  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:45.378614  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:45.389636  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:45.389712  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:45.415855  167468 cri.go:89] found id: ""
	I1009 19:13:45.415873  167468 logs.go:282] 0 containers: []
	W1009 19:13:45.415880  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:45.415886  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:45.415934  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:45.444082  167468 cri.go:89] found id: ""
	I1009 19:13:45.444099  167468 logs.go:282] 0 containers: []
	W1009 19:13:45.444106  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:45.444111  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:45.444159  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:45.470687  167468 cri.go:89] found id: ""
	I1009 19:13:45.470707  167468 logs.go:282] 0 containers: []
	W1009 19:13:45.470718  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:45.470725  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:45.470780  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:45.499546  167468 cri.go:89] found id: ""
	I1009 19:13:45.499563  167468 logs.go:282] 0 containers: []
	W1009 19:13:45.499569  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:45.499580  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:45.499627  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:45.527809  167468 cri.go:89] found id: ""
	I1009 19:13:45.527828  167468 logs.go:282] 0 containers: []
	W1009 19:13:45.527837  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:45.527843  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:45.527895  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:45.555994  167468 cri.go:89] found id: ""
	I1009 19:13:45.556012  167468 logs.go:282] 0 containers: []
	W1009 19:13:45.556022  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:45.556030  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:45.556162  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:45.583148  167468 cri.go:89] found id: ""
	I1009 19:13:45.583165  167468 logs.go:282] 0 containers: []
	W1009 19:13:45.583171  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:45.583180  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:45.583191  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:45.653733  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:45.653757  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:45.665821  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:45.665842  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:45.723605  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:45.715791   10333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:45.716399   10333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:45.718036   10333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:45.718509   10333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:45.719963   10333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:45.715791   10333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:45.716399   10333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:45.718036   10333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:45.718509   10333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:45.719963   10333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:45.723618  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:45.723632  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:45.785630  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:45.785651  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:48.317201  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:48.328498  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:48.328563  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:48.356507  167468 cri.go:89] found id: ""
	I1009 19:13:48.356526  167468 logs.go:282] 0 containers: []
	W1009 19:13:48.356534  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:48.356542  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:48.356604  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:48.385398  167468 cri.go:89] found id: ""
	I1009 19:13:48.385416  167468 logs.go:282] 0 containers: []
	W1009 19:13:48.385422  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:48.385427  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:48.385477  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:48.412259  167468 cri.go:89] found id: ""
	I1009 19:13:48.412276  167468 logs.go:282] 0 containers: []
	W1009 19:13:48.412284  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:48.412289  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:48.412339  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:48.440453  167468 cri.go:89] found id: ""
	I1009 19:13:48.440471  167468 logs.go:282] 0 containers: []
	W1009 19:13:48.440479  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:48.440486  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:48.440549  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:48.469351  167468 cri.go:89] found id: ""
	I1009 19:13:48.469367  167468 logs.go:282] 0 containers: []
	W1009 19:13:48.469374  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:48.469396  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:48.469457  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:48.498335  167468 cri.go:89] found id: ""
	I1009 19:13:48.498349  167468 logs.go:282] 0 containers: []
	W1009 19:13:48.498355  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:48.498360  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:48.498424  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:48.525258  167468 cri.go:89] found id: ""
	I1009 19:13:48.525275  167468 logs.go:282] 0 containers: []
	W1009 19:13:48.525282  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:48.525292  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:48.525307  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:48.590425  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:48.590448  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:48.602233  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:48.602252  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:48.660259  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:48.653067   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:48.653655   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:48.655299   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:48.655831   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:48.656956   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:48.653067   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:48.653655   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:48.655299   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:48.655831   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:48.656956   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:48.660269  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:48.660281  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:48.724597  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:48.724621  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:51.257337  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:51.269111  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:51.269166  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:51.296195  167468 cri.go:89] found id: ""
	I1009 19:13:51.296210  167468 logs.go:282] 0 containers: []
	W1009 19:13:51.296216  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:51.296221  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:51.296282  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:51.322519  167468 cri.go:89] found id: ""
	I1009 19:13:51.322536  167468 logs.go:282] 0 containers: []
	W1009 19:13:51.322542  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:51.322547  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:51.322594  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:51.349587  167468 cri.go:89] found id: ""
	I1009 19:13:51.349603  167468 logs.go:282] 0 containers: []
	W1009 19:13:51.349609  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:51.349614  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:51.349667  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:51.377783  167468 cri.go:89] found id: ""
	I1009 19:13:51.377801  167468 logs.go:282] 0 containers: []
	W1009 19:13:51.377809  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:51.377814  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:51.377865  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:51.404656  167468 cri.go:89] found id: ""
	I1009 19:13:51.404672  167468 logs.go:282] 0 containers: []
	W1009 19:13:51.404681  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:51.404688  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:51.404747  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:51.430810  167468 cri.go:89] found id: ""
	I1009 19:13:51.430826  167468 logs.go:282] 0 containers: []
	W1009 19:13:51.430832  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:51.430838  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:51.430896  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:51.457166  167468 cri.go:89] found id: ""
	I1009 19:13:51.457189  167468 logs.go:282] 0 containers: []
	W1009 19:13:51.457200  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:51.457211  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:51.457223  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:51.521965  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:51.521988  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:51.534521  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:51.534545  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:51.593719  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:51.585963   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:51.586439   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:51.588046   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:51.588481   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:51.590012   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:51.585963   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:51.586439   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:51.588046   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:51.588481   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:51.590012   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:51.593731  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:51.593740  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:51.654584  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:51.654606  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:54.187112  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:54.198337  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:54.198414  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:54.225550  167468 cri.go:89] found id: ""
	I1009 19:13:54.225570  167468 logs.go:282] 0 containers: []
	W1009 19:13:54.225584  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:54.225591  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:54.225639  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:54.252848  167468 cri.go:89] found id: ""
	I1009 19:13:54.252864  167468 logs.go:282] 0 containers: []
	W1009 19:13:54.252871  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:54.252876  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:54.252936  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:54.279625  167468 cri.go:89] found id: ""
	I1009 19:13:54.279642  167468 logs.go:282] 0 containers: []
	W1009 19:13:54.279648  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:54.279659  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:54.279715  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:54.307491  167468 cri.go:89] found id: ""
	I1009 19:13:54.307507  167468 logs.go:282] 0 containers: []
	W1009 19:13:54.307513  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:54.307518  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:54.307571  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:54.335023  167468 cri.go:89] found id: ""
	I1009 19:13:54.335048  167468 logs.go:282] 0 containers: []
	W1009 19:13:54.335056  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:54.335063  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:54.335121  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:54.362616  167468 cri.go:89] found id: ""
	I1009 19:13:54.362633  167468 logs.go:282] 0 containers: []
	W1009 19:13:54.362640  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:54.362645  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:54.362719  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:54.391155  167468 cri.go:89] found id: ""
	I1009 19:13:54.391175  167468 logs.go:282] 0 containers: []
	W1009 19:13:54.391186  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:54.391197  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:54.391212  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:54.452190  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:54.444274   10694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:54.444870   10694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:54.446625   10694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:54.447165   10694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:54.448804   10694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:54.444274   10694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:54.444870   10694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:54.446625   10694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:54.447165   10694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:54.448804   10694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:54.452204  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:54.452219  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:54.514282  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:54.514306  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:54.544238  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:54.544256  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:54.612145  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:54.612173  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:57.125509  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:57.136612  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:57.136699  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:57.162822  167468 cri.go:89] found id: ""
	I1009 19:13:57.162841  167468 logs.go:282] 0 containers: []
	W1009 19:13:57.162849  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:57.162854  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:57.162903  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:57.190000  167468 cri.go:89] found id: ""
	I1009 19:13:57.190018  167468 logs.go:282] 0 containers: []
	W1009 19:13:57.190025  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:57.190030  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:57.190077  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:57.217780  167468 cri.go:89] found id: ""
	I1009 19:13:57.217801  167468 logs.go:282] 0 containers: []
	W1009 19:13:57.217812  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:57.217819  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:57.217876  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:57.243876  167468 cri.go:89] found id: ""
	I1009 19:13:57.243898  167468 logs.go:282] 0 containers: []
	W1009 19:13:57.243908  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:57.243914  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:57.243976  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:57.270405  167468 cri.go:89] found id: ""
	I1009 19:13:57.270425  167468 logs.go:282] 0 containers: []
	W1009 19:13:57.270432  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:57.270437  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:57.270486  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:57.299825  167468 cri.go:89] found id: ""
	I1009 19:13:57.299841  167468 logs.go:282] 0 containers: []
	W1009 19:13:57.299848  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:57.299853  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:57.299914  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:57.327570  167468 cri.go:89] found id: ""
	I1009 19:13:57.327587  167468 logs.go:282] 0 containers: []
	W1009 19:13:57.327594  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:57.327603  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:57.327615  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:57.359019  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:57.359050  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:57.428142  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:57.428165  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:57.440563  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:57.440584  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:57.500538  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:57.492802   10837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:57.493421   10837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:57.495026   10837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:57.495441   10837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:57.497020   10837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:57.492802   10837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:57.493421   10837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:57.495026   10837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:57.495441   10837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:57.497020   10837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:57.500549  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:57.500567  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:00.065761  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:00.077245  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:00.077320  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:00.106125  167468 cri.go:89] found id: ""
	I1009 19:14:00.106140  167468 logs.go:282] 0 containers: []
	W1009 19:14:00.106146  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:00.106151  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:00.106202  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:00.134788  167468 cri.go:89] found id: ""
	I1009 19:14:00.134807  167468 logs.go:282] 0 containers: []
	W1009 19:14:00.134818  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:00.134824  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:00.134891  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:00.163060  167468 cri.go:89] found id: ""
	I1009 19:14:00.163076  167468 logs.go:282] 0 containers: []
	W1009 19:14:00.163082  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:00.163087  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:00.163135  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:00.192113  167468 cri.go:89] found id: ""
	I1009 19:14:00.192133  167468 logs.go:282] 0 containers: []
	W1009 19:14:00.192143  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:00.192149  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:00.192210  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:00.218783  167468 cri.go:89] found id: ""
	I1009 19:14:00.218804  167468 logs.go:282] 0 containers: []
	W1009 19:14:00.218811  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:00.218817  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:00.218868  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:00.246603  167468 cri.go:89] found id: ""
	I1009 19:14:00.246620  167468 logs.go:282] 0 containers: []
	W1009 19:14:00.246627  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:00.246632  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:00.246683  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:00.274697  167468 cri.go:89] found id: ""
	I1009 19:14:00.274713  167468 logs.go:282] 0 containers: []
	W1009 19:14:00.274719  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:00.274729  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:00.274739  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:00.287013  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:00.287030  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:00.348225  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:00.340024   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:00.340555   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:00.342294   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:00.342898   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:00.344448   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:00.340024   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:00.340555   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:00.342294   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:00.342898   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:00.344448   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:00.348243  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:00.348255  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:00.414970  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:00.415009  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:00.446010  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:00.446031  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:03.018679  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:03.030482  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:03.030538  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:03.059098  167468 cri.go:89] found id: ""
	I1009 19:14:03.059119  167468 logs.go:282] 0 containers: []
	W1009 19:14:03.059129  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:03.059137  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:03.059195  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:03.086255  167468 cri.go:89] found id: ""
	I1009 19:14:03.086273  167468 logs.go:282] 0 containers: []
	W1009 19:14:03.086279  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:03.086286  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:03.086351  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:03.113417  167468 cri.go:89] found id: ""
	I1009 19:14:03.113437  167468 logs.go:282] 0 containers: []
	W1009 19:14:03.113444  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:03.113450  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:03.113507  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:03.141043  167468 cri.go:89] found id: ""
	I1009 19:14:03.141064  167468 logs.go:282] 0 containers: []
	W1009 19:14:03.141073  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:03.141080  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:03.141139  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:03.168482  167468 cri.go:89] found id: ""
	I1009 19:14:03.168500  167468 logs.go:282] 0 containers: []
	W1009 19:14:03.168510  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:03.168515  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:03.168562  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:03.195613  167468 cri.go:89] found id: ""
	I1009 19:14:03.195634  167468 logs.go:282] 0 containers: []
	W1009 19:14:03.195640  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:03.195648  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:03.195700  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:03.223082  167468 cri.go:89] found id: ""
	I1009 19:14:03.223102  167468 logs.go:282] 0 containers: []
	W1009 19:14:03.223113  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:03.223126  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:03.223140  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:03.289799  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:03.289826  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:03.302088  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:03.302108  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:03.361951  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:03.354529   11063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:03.355199   11063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:03.356810   11063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:03.357258   11063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:03.358331   11063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:03.354529   11063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:03.355199   11063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:03.356810   11063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:03.357258   11063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:03.358331   11063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:03.361965  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:03.361976  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:03.424809  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:03.424834  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:05.957140  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:05.968183  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:05.968233  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:05.994237  167468 cri.go:89] found id: ""
	I1009 19:14:05.994255  167468 logs.go:282] 0 containers: []
	W1009 19:14:05.994263  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:05.994268  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:05.994316  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:06.023106  167468 cri.go:89] found id: ""
	I1009 19:14:06.023124  167468 logs.go:282] 0 containers: []
	W1009 19:14:06.023131  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:06.023136  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:06.023194  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:06.049764  167468 cri.go:89] found id: ""
	I1009 19:14:06.049780  167468 logs.go:282] 0 containers: []
	W1009 19:14:06.049786  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:06.049790  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:06.049838  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:06.077023  167468 cri.go:89] found id: ""
	I1009 19:14:06.077038  167468 logs.go:282] 0 containers: []
	W1009 19:14:06.077044  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:06.077049  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:06.077097  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:06.105013  167468 cri.go:89] found id: ""
	I1009 19:14:06.105029  167468 logs.go:282] 0 containers: []
	W1009 19:14:06.105035  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:06.105040  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:06.105089  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:06.132736  167468 cri.go:89] found id: ""
	I1009 19:14:06.132754  167468 logs.go:282] 0 containers: []
	W1009 19:14:06.132761  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:06.132766  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:06.132813  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:06.160441  167468 cri.go:89] found id: ""
	I1009 19:14:06.160459  167468 logs.go:282] 0 containers: []
	W1009 19:14:06.160467  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:06.160477  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:06.160493  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:06.230865  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:06.230891  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:06.243543  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:06.243563  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:06.302803  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:06.294756   11191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:06.295321   11191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:06.296956   11191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:06.297533   11191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:06.299112   11191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:06.294756   11191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:06.295321   11191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:06.296956   11191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:06.297533   11191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:06.299112   11191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:06.302821  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:06.302836  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:06.363249  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:06.363274  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:08.896321  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:08.907567  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:08.907629  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:08.935200  167468 cri.go:89] found id: ""
	I1009 19:14:08.935217  167468 logs.go:282] 0 containers: []
	W1009 19:14:08.935224  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:08.935229  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:08.935279  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:08.962910  167468 cri.go:89] found id: ""
	I1009 19:14:08.962930  167468 logs.go:282] 0 containers: []
	W1009 19:14:08.962939  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:08.962945  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:08.963017  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:08.990218  167468 cri.go:89] found id: ""
	I1009 19:14:08.990235  167468 logs.go:282] 0 containers: []
	W1009 19:14:08.990252  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:08.990258  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:08.990306  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:09.015799  167468 cri.go:89] found id: ""
	I1009 19:14:09.015815  167468 logs.go:282] 0 containers: []
	W1009 19:14:09.015822  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:09.015826  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:09.015875  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:09.042470  167468 cri.go:89] found id: ""
	I1009 19:14:09.042485  167468 logs.go:282] 0 containers: []
	W1009 19:14:09.042492  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:09.042497  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:09.042553  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:09.069980  167468 cri.go:89] found id: ""
	I1009 19:14:09.069996  167468 logs.go:282] 0 containers: []
	W1009 19:14:09.070006  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:09.070011  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:09.070062  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:09.097327  167468 cri.go:89] found id: ""
	I1009 19:14:09.097347  167468 logs.go:282] 0 containers: []
	W1009 19:14:09.097358  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:09.097369  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:09.097395  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:09.166588  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:09.166613  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:09.179033  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:09.179053  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:09.237875  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:09.230485   11317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:09.231039   11317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:09.232636   11317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:09.233112   11317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:09.234282   11317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:09.230485   11317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:09.231039   11317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:09.232636   11317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:09.233112   11317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:09.234282   11317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:09.237886  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:09.237896  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:09.297149  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:09.297173  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:11.829632  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:11.841003  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:11.841054  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:11.868151  167468 cri.go:89] found id: ""
	I1009 19:14:11.868168  167468 logs.go:282] 0 containers: []
	W1009 19:14:11.868175  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:11.868181  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:11.868229  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:11.894303  167468 cri.go:89] found id: ""
	I1009 19:14:11.894319  167468 logs.go:282] 0 containers: []
	W1009 19:14:11.894325  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:11.894333  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:11.894406  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:11.921553  167468 cri.go:89] found id: ""
	I1009 19:14:11.921569  167468 logs.go:282] 0 containers: []
	W1009 19:14:11.921576  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:11.921582  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:11.921640  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:11.948362  167468 cri.go:89] found id: ""
	I1009 19:14:11.948392  167468 logs.go:282] 0 containers: []
	W1009 19:14:11.948404  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:11.948410  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:11.948463  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:11.975053  167468 cri.go:89] found id: ""
	I1009 19:14:11.975074  167468 logs.go:282] 0 containers: []
	W1009 19:14:11.975082  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:11.975090  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:11.975147  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:12.002192  167468 cri.go:89] found id: ""
	I1009 19:14:12.002206  167468 logs.go:282] 0 containers: []
	W1009 19:14:12.002212  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:12.002217  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:12.002263  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:12.029135  167468 cri.go:89] found id: ""
	I1009 19:14:12.029150  167468 logs.go:282] 0 containers: []
	W1009 19:14:12.029156  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:12.029165  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:12.029231  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:12.089147  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:12.089168  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:12.123009  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:12.123029  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:12.194542  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:12.194566  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:12.207426  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:12.207447  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:12.268201  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:12.260274   11469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:12.260836   11469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:12.262548   11469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:12.263082   11469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:12.264595   11469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:12.260274   11469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:12.260836   11469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:12.262548   11469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:12.263082   11469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:12.264595   11469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:14.768939  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:14.779994  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:14.780055  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:14.806625  167468 cri.go:89] found id: ""
	I1009 19:14:14.806642  167468 logs.go:282] 0 containers: []
	W1009 19:14:14.806648  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:14.806653  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:14.806709  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:14.834144  167468 cri.go:89] found id: ""
	I1009 19:14:14.834161  167468 logs.go:282] 0 containers: []
	W1009 19:14:14.834168  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:14.834173  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:14.834217  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:14.859842  167468 cri.go:89] found id: ""
	I1009 19:14:14.859857  167468 logs.go:282] 0 containers: []
	W1009 19:14:14.859863  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:14.859868  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:14.859915  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:14.886983  167468 cri.go:89] found id: ""
	I1009 19:14:14.887002  167468 logs.go:282] 0 containers: []
	W1009 19:14:14.887011  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:14.887017  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:14.887077  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:14.915279  167468 cri.go:89] found id: ""
	I1009 19:14:14.915297  167468 logs.go:282] 0 containers: []
	W1009 19:14:14.915304  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:14.915310  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:14.915367  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:14.943496  167468 cri.go:89] found id: ""
	I1009 19:14:14.943515  167468 logs.go:282] 0 containers: []
	W1009 19:14:14.943522  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:14.943527  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:14.943576  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:14.971449  167468 cri.go:89] found id: ""
	I1009 19:14:14.971466  167468 logs.go:282] 0 containers: []
	W1009 19:14:14.971472  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:14.971481  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:14.971492  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:15.002283  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:15.002302  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:15.068728  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:15.068752  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:15.080899  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:15.080916  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:15.141200  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:15.133517   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:15.134060   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:15.135645   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:15.136103   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:15.137648   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:15.133517   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:15.134060   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:15.135645   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:15.136103   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:15.137648   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:15.141211  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:15.141222  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:17.703757  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:17.715432  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:17.715488  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:17.742801  167468 cri.go:89] found id: ""
	I1009 19:14:17.742818  167468 logs.go:282] 0 containers: []
	W1009 19:14:17.742825  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:17.742831  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:17.742894  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:17.770041  167468 cri.go:89] found id: ""
	I1009 19:14:17.770058  167468 logs.go:282] 0 containers: []
	W1009 19:14:17.770067  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:17.770074  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:17.770123  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:17.798373  167468 cri.go:89] found id: ""
	I1009 19:14:17.798401  167468 logs.go:282] 0 containers: []
	W1009 19:14:17.798410  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:17.798416  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:17.798467  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:17.826589  167468 cri.go:89] found id: ""
	I1009 19:14:17.826607  167468 logs.go:282] 0 containers: []
	W1009 19:14:17.826613  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:17.826619  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:17.826668  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:17.853849  167468 cri.go:89] found id: ""
	I1009 19:14:17.853870  167468 logs.go:282] 0 containers: []
	W1009 19:14:17.853879  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:17.853886  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:17.853940  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:17.880708  167468 cri.go:89] found id: ""
	I1009 19:14:17.880728  167468 logs.go:282] 0 containers: []
	W1009 19:14:17.880738  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:17.880745  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:17.880801  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:17.907949  167468 cri.go:89] found id: ""
	I1009 19:14:17.907970  167468 logs.go:282] 0 containers: []
	W1009 19:14:17.907980  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:17.907990  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:17.908000  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:17.977368  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:17.977398  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:17.989589  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:17.989607  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:18.048403  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:18.040956   11691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:18.041628   11691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:18.043275   11691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:18.043797   11691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:18.044915   11691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:18.040956   11691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:18.041628   11691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:18.043275   11691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:18.043797   11691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:18.044915   11691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:18.048425  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:18.048436  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:18.109745  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:18.109768  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:20.641770  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:20.652651  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:20.652706  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:20.680068  167468 cri.go:89] found id: ""
	I1009 19:14:20.680087  167468 logs.go:282] 0 containers: []
	W1009 19:14:20.680097  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:20.680104  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:20.680154  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:20.707239  167468 cri.go:89] found id: ""
	I1009 19:14:20.707258  167468 logs.go:282] 0 containers: []
	W1009 19:14:20.707265  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:20.707270  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:20.707326  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:20.735326  167468 cri.go:89] found id: ""
	I1009 19:14:20.735344  167468 logs.go:282] 0 containers: []
	W1009 19:14:20.735354  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:20.735361  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:20.735435  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:20.761699  167468 cri.go:89] found id: ""
	I1009 19:14:20.761716  167468 logs.go:282] 0 containers: []
	W1009 19:14:20.761723  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:20.761730  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:20.761779  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:20.789487  167468 cri.go:89] found id: ""
	I1009 19:14:20.789503  167468 logs.go:282] 0 containers: []
	W1009 19:14:20.789510  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:20.789515  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:20.789564  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:20.815048  167468 cri.go:89] found id: ""
	I1009 19:14:20.815068  167468 logs.go:282] 0 containers: []
	W1009 19:14:20.815077  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:20.815085  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:20.815133  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:20.841854  167468 cri.go:89] found id: ""
	I1009 19:14:20.841869  167468 logs.go:282] 0 containers: []
	W1009 19:14:20.841876  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:20.841884  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:20.841897  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:20.902143  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:20.893674   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:20.894242   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:20.895810   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:20.896216   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:20.898541   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:20.893674   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:20.894242   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:20.895810   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:20.896216   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:20.898541   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:20.902156  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:20.902168  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:20.963057  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:20.963081  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:20.994033  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:20.994052  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:21.059710  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:21.059732  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:23.573543  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:23.585055  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:23.585120  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:23.611298  167468 cri.go:89] found id: ""
	I1009 19:14:23.611316  167468 logs.go:282] 0 containers: []
	W1009 19:14:23.611327  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:23.611334  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:23.611403  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:23.639797  167468 cri.go:89] found id: ""
	I1009 19:14:23.639813  167468 logs.go:282] 0 containers: []
	W1009 19:14:23.639822  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:23.639828  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:23.639894  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:23.667001  167468 cri.go:89] found id: ""
	I1009 19:14:23.667016  167468 logs.go:282] 0 containers: []
	W1009 19:14:23.667023  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:23.667028  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:23.667073  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:23.693487  167468 cri.go:89] found id: ""
	I1009 19:14:23.693502  167468 logs.go:282] 0 containers: []
	W1009 19:14:23.693510  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:23.693514  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:23.693565  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:23.720512  167468 cri.go:89] found id: ""
	I1009 19:14:23.720527  167468 logs.go:282] 0 containers: []
	W1009 19:14:23.720533  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:23.720538  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:23.720585  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:23.748368  167468 cri.go:89] found id: ""
	I1009 19:14:23.748408  167468 logs.go:282] 0 containers: []
	W1009 19:14:23.748418  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:23.748425  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:23.748488  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:23.776610  167468 cri.go:89] found id: ""
	I1009 19:14:23.776626  167468 logs.go:282] 0 containers: []
	W1009 19:14:23.776634  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:23.776681  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:23.776697  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:23.847110  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:23.847133  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:23.860359  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:23.860390  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:23.920518  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:23.912620   11928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:23.913240   11928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:23.914784   11928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:23.915304   11928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:23.916845   11928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:23.912620   11928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:23.913240   11928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:23.914784   11928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:23.915304   11928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:23.916845   11928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:23.920529  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:23.920541  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:23.985060  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:23.985084  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:26.518171  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:26.529182  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:26.529244  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:26.555907  167468 cri.go:89] found id: ""
	I1009 19:14:26.555925  167468 logs.go:282] 0 containers: []
	W1009 19:14:26.555936  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:26.555942  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:26.555992  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:26.583126  167468 cri.go:89] found id: ""
	I1009 19:14:26.583144  167468 logs.go:282] 0 containers: []
	W1009 19:14:26.583155  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:26.583162  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:26.583223  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:26.609859  167468 cri.go:89] found id: ""
	I1009 19:14:26.609880  167468 logs.go:282] 0 containers: []
	W1009 19:14:26.609889  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:26.609894  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:26.609949  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:26.635864  167468 cri.go:89] found id: ""
	I1009 19:14:26.635883  167468 logs.go:282] 0 containers: []
	W1009 19:14:26.635890  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:26.635895  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:26.635978  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:26.663639  167468 cri.go:89] found id: ""
	I1009 19:14:26.663658  167468 logs.go:282] 0 containers: []
	W1009 19:14:26.663664  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:26.663670  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:26.663718  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:26.690743  167468 cri.go:89] found id: ""
	I1009 19:14:26.690759  167468 logs.go:282] 0 containers: []
	W1009 19:14:26.690766  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:26.690772  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:26.690830  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:26.718602  167468 cri.go:89] found id: ""
	I1009 19:14:26.718621  167468 logs.go:282] 0 containers: []
	W1009 19:14:26.718627  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:26.718636  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:26.718646  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:26.789980  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:26.790003  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:26.802817  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:26.802837  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:26.861119  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:26.853689   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:26.854304   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:26.855781   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:26.856245   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:26.857603   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:26.853689   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:26.854304   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:26.855781   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:26.856245   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:26.857603   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:26.861132  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:26.861144  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:26.923808  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:26.923846  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
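	The timestamps (19:14:20, 19:14:23, 19:14:26, ...) show the same checks repeating roughly every three seconds until an apiserver process appears or the wait times out. A shell equivalent of that poll, illustrative only and not minikube's actual implementation; the pgrep pattern is the one shown in the log:

	    # Poll until a kube-apiserver process matching the pattern from the log shows up.
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 3
	    done
	    echo "kube-apiserver process found"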
	I1009 19:14:29.457408  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:29.468649  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:29.468701  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:29.496077  167468 cri.go:89] found id: ""
	I1009 19:14:29.496093  167468 logs.go:282] 0 containers: []
	W1009 19:14:29.496099  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:29.496105  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:29.496153  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:29.523269  167468 cri.go:89] found id: ""
	I1009 19:14:29.523286  167468 logs.go:282] 0 containers: []
	W1009 19:14:29.523294  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:29.523299  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:29.523354  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:29.551202  167468 cri.go:89] found id: ""
	I1009 19:14:29.551218  167468 logs.go:282] 0 containers: []
	W1009 19:14:29.551224  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:29.551229  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:29.551277  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:29.578618  167468 cri.go:89] found id: ""
	I1009 19:14:29.578633  167468 logs.go:282] 0 containers: []
	W1009 19:14:29.578640  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:29.578645  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:29.578699  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:29.605239  167468 cri.go:89] found id: ""
	I1009 19:14:29.605257  167468 logs.go:282] 0 containers: []
	W1009 19:14:29.605267  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:29.605273  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:29.605320  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:29.632558  167468 cri.go:89] found id: ""
	I1009 19:14:29.632581  167468 logs.go:282] 0 containers: []
	W1009 19:14:29.632589  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:29.632595  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:29.632644  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:29.660045  167468 cri.go:89] found id: ""
	I1009 19:14:29.660061  167468 logs.go:282] 0 containers: []
	W1009 19:14:29.660067  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:29.660076  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:29.660087  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:29.689848  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:29.689866  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:29.759204  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:29.759227  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:29.771334  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:29.771352  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:29.830651  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:29.823435   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:29.824026   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:29.825599   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:29.826136   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:29.827250   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:29.823435   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:29.824026   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:29.825599   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:29.826136   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:29.827250   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:29.830667  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:29.830678  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:32.393048  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:32.405075  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:32.405143  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:32.434099  167468 cri.go:89] found id: ""
	I1009 19:14:32.434119  167468 logs.go:282] 0 containers: []
	W1009 19:14:32.434136  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:32.434141  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:32.434199  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:32.461266  167468 cri.go:89] found id: ""
	I1009 19:14:32.461294  167468 logs.go:282] 0 containers: []
	W1009 19:14:32.461304  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:32.461310  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:32.461361  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:32.488620  167468 cri.go:89] found id: ""
	I1009 19:14:32.488636  167468 logs.go:282] 0 containers: []
	W1009 19:14:32.488644  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:32.488649  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:32.488696  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:32.515907  167468 cri.go:89] found id: ""
	I1009 19:14:32.515924  167468 logs.go:282] 0 containers: []
	W1009 19:14:32.515931  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:32.515936  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:32.515984  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:32.543671  167468 cri.go:89] found id: ""
	I1009 19:14:32.543690  167468 logs.go:282] 0 containers: []
	W1009 19:14:32.543697  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:32.543703  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:32.543751  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:32.571189  167468 cri.go:89] found id: ""
	I1009 19:14:32.571205  167468 logs.go:282] 0 containers: []
	W1009 19:14:32.571211  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:32.571216  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:32.571261  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:32.598521  167468 cri.go:89] found id: ""
	I1009 19:14:32.598539  167468 logs.go:282] 0 containers: []
	W1009 19:14:32.598546  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:32.598554  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:32.598565  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:32.663582  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:32.663609  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:32.675873  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:32.675891  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:32.735973  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:32.728326   12302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:32.728914   12302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:32.730601   12302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:32.731110   12302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:32.732693   12302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:32.728326   12302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:32.728914   12302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:32.730601   12302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:32.731110   12302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:32.732693   12302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:32.735984  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:32.735995  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:32.799860  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:32.799882  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
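	Every "describe nodes" attempt above fails because nothing is accepting connections on localhost:8441, the apiserver endpoint kubectl is reading from /var/lib/minikube/kubeconfig. A quick way to confirm that from the node, assuming standard tools (ss, curl) are available; these commands are not part of the log itself:

	    # Is anything listening on the apiserver port?
	    sudo ss -ltnp | grep ':8441' || echo "nothing listening on 8441"
	    # Hitting the health endpoint directly; "connection refused" matches the errors above.
	    curl -ksS https://localhost:8441/healthz || true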
	I1009 19:14:35.330659  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:35.341858  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:35.341908  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:35.369356  167468 cri.go:89] found id: ""
	I1009 19:14:35.369371  167468 logs.go:282] 0 containers: []
	W1009 19:14:35.369396  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:35.369403  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:35.369454  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:35.397530  167468 cri.go:89] found id: ""
	I1009 19:14:35.397549  167468 logs.go:282] 0 containers: []
	W1009 19:14:35.397556  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:35.397561  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:35.397613  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:35.425543  167468 cri.go:89] found id: ""
	I1009 19:14:35.425565  167468 logs.go:282] 0 containers: []
	W1009 19:14:35.425572  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:35.425577  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:35.425629  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:35.451820  167468 cri.go:89] found id: ""
	I1009 19:14:35.451912  167468 logs.go:282] 0 containers: []
	W1009 19:14:35.451924  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:35.451932  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:35.452003  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:35.479131  167468 cri.go:89] found id: ""
	I1009 19:14:35.479149  167468 logs.go:282] 0 containers: []
	W1009 19:14:35.479158  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:35.479165  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:35.479226  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:35.507763  167468 cri.go:89] found id: ""
	I1009 19:14:35.507793  167468 logs.go:282] 0 containers: []
	W1009 19:14:35.507802  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:35.507807  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:35.507856  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:35.536306  167468 cri.go:89] found id: ""
	I1009 19:14:35.536323  167468 logs.go:282] 0 containers: []
	W1009 19:14:35.536329  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:35.536337  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:35.536348  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:35.602873  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:35.602895  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:35.615060  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:35.615079  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:35.674681  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:35.666563   12418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:35.667233   12418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:35.668881   12418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:35.669447   12418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:35.671017   12418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:35.666563   12418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:35.667233   12418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:35.668881   12418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:35.669447   12418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:35.671017   12418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:35.674694  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:35.674705  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:35.738408  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:35.738431  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:38.270303  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:38.281687  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:38.281748  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:38.309100  167468 cri.go:89] found id: ""
	I1009 19:14:38.309115  167468 logs.go:282] 0 containers: []
	W1009 19:14:38.309121  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:38.309127  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:38.309175  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:38.337672  167468 cri.go:89] found id: ""
	I1009 19:14:38.337689  167468 logs.go:282] 0 containers: []
	W1009 19:14:38.337697  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:38.337702  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:38.337757  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:38.366315  167468 cri.go:89] found id: ""
	I1009 19:14:38.366331  167468 logs.go:282] 0 containers: []
	W1009 19:14:38.366338  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:38.366343  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:38.366410  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:38.394168  167468 cri.go:89] found id: ""
	I1009 19:14:38.394184  167468 logs.go:282] 0 containers: []
	W1009 19:14:38.394191  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:38.394195  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:38.394249  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:38.422647  167468 cri.go:89] found id: ""
	I1009 19:14:38.422667  167468 logs.go:282] 0 containers: []
	W1009 19:14:38.422678  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:38.422685  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:38.422772  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:38.452008  167468 cri.go:89] found id: ""
	I1009 19:14:38.452026  167468 logs.go:282] 0 containers: []
	W1009 19:14:38.452033  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:38.452038  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:38.452099  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:38.480564  167468 cri.go:89] found id: ""
	I1009 19:14:38.480586  167468 logs.go:282] 0 containers: []
	W1009 19:14:38.480597  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:38.480607  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:38.480624  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:38.547918  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:38.547950  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:38.559951  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:38.559971  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:38.618131  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:38.610538   12552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:38.611169   12552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:38.612854   12552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:38.613360   12552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:38.614757   12552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:38.610538   12552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:38.611169   12552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:38.612854   12552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:38.613360   12552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:38.614757   12552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:38.618142  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:38.618153  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:38.682619  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:38.682643  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:41.214700  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:41.225692  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:41.225744  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:41.252521  167468 cri.go:89] found id: ""
	I1009 19:14:41.252537  167468 logs.go:282] 0 containers: []
	W1009 19:14:41.252543  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:41.252548  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:41.252598  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:41.280073  167468 cri.go:89] found id: ""
	I1009 19:14:41.280090  167468 logs.go:282] 0 containers: []
	W1009 19:14:41.280095  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:41.280100  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:41.280147  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:41.307469  167468 cri.go:89] found id: ""
	I1009 19:14:41.307490  167468 logs.go:282] 0 containers: []
	W1009 19:14:41.307499  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:41.307505  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:41.307554  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:41.334966  167468 cri.go:89] found id: ""
	I1009 19:14:41.334982  167468 logs.go:282] 0 containers: []
	W1009 19:14:41.334991  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:41.334998  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:41.335060  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:41.362582  167468 cri.go:89] found id: ""
	I1009 19:14:41.362600  167468 logs.go:282] 0 containers: []
	W1009 19:14:41.362607  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:41.362612  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:41.362667  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:41.390351  167468 cri.go:89] found id: ""
	I1009 19:14:41.390369  167468 logs.go:282] 0 containers: []
	W1009 19:14:41.390390  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:41.390397  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:41.390453  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:41.417390  167468 cri.go:89] found id: ""
	I1009 19:14:41.417410  167468 logs.go:282] 0 containers: []
	W1009 19:14:41.417418  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:41.417428  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:41.417438  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:41.484701  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:41.484724  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:41.497051  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:41.497068  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:41.555902  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:41.548817   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:41.549403   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:41.550938   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:41.551329   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:41.552636   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:41.548817   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:41.549403   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:41.550938   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:41.551329   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:41.552636   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:41.555915  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:41.555927  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:41.618927  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:41.618950  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
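	Every cycle finds no kube-apiserver, etcd, kube-scheduler, or kube-controller-manager container at all, which suggests the control-plane containers were never created rather than crashing after start. Assuming the node was bootstrapped with kubeadm (minikube's default bootstrapper), the control-plane static pod manifests would normally live under /etc/kubernetes/manifests; a hedged follow-up check, not part of this log:

	    # Are the control-plane manifests in place for the kubelet to pick up?
	    ls -l /etc/kubernetes/manifests
	    # Pull the most recent kubelet errors to see why the static pods are not being started.
	    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40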
	I1009 19:14:44.151566  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:44.162952  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:44.163024  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:44.188939  167468 cri.go:89] found id: ""
	I1009 19:14:44.188954  167468 logs.go:282] 0 containers: []
	W1009 19:14:44.188962  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:44.188969  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:44.189053  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:44.216484  167468 cri.go:89] found id: ""
	I1009 19:14:44.216504  167468 logs.go:282] 0 containers: []
	W1009 19:14:44.216514  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:44.216520  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:44.216575  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:44.244062  167468 cri.go:89] found id: ""
	I1009 19:14:44.244079  167468 logs.go:282] 0 containers: []
	W1009 19:14:44.244089  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:44.244096  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:44.244164  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:44.272014  167468 cri.go:89] found id: ""
	I1009 19:14:44.272031  167468 logs.go:282] 0 containers: []
	W1009 19:14:44.272040  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:44.272047  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:44.272099  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:44.298566  167468 cri.go:89] found id: ""
	I1009 19:14:44.298584  167468 logs.go:282] 0 containers: []
	W1009 19:14:44.298598  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:44.298605  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:44.298666  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:44.327273  167468 cri.go:89] found id: ""
	I1009 19:14:44.327290  167468 logs.go:282] 0 containers: []
	W1009 19:14:44.327297  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:44.327302  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:44.327352  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:44.354325  167468 cri.go:89] found id: ""
	I1009 19:14:44.354341  167468 logs.go:282] 0 containers: []
	W1009 19:14:44.354347  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:44.354356  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:44.354367  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:44.413429  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:44.405599   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:44.406160   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:44.407858   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:44.408392   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:44.409925   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:44.405599   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:44.406160   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:44.407858   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:44.408392   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:44.409925   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:44.413442  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:44.413453  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:44.473888  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:44.473911  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:44.506171  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:44.506189  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:44.572347  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:44.572369  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:47.086686  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:47.098491  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:47.098552  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:47.129087  167468 cri.go:89] found id: ""
	I1009 19:14:47.129104  167468 logs.go:282] 0 containers: []
	W1009 19:14:47.129111  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:47.129116  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:47.129163  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:47.157143  167468 cri.go:89] found id: ""
	I1009 19:14:47.157161  167468 logs.go:282] 0 containers: []
	W1009 19:14:47.157167  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:47.157172  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:47.157223  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:47.184337  167468 cri.go:89] found id: ""
	I1009 19:14:47.184352  167468 logs.go:282] 0 containers: []
	W1009 19:14:47.184358  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:47.184365  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:47.184429  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:47.213264  167468 cri.go:89] found id: ""
	I1009 19:14:47.213280  167468 logs.go:282] 0 containers: []
	W1009 19:14:47.213291  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:47.213298  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:47.213356  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:47.240766  167468 cri.go:89] found id: ""
	I1009 19:14:47.240786  167468 logs.go:282] 0 containers: []
	W1009 19:14:47.240793  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:47.240798  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:47.240847  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:47.267656  167468 cri.go:89] found id: ""
	I1009 19:14:47.267675  167468 logs.go:282] 0 containers: []
	W1009 19:14:47.267686  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:47.267692  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:47.267760  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:47.297799  167468 cri.go:89] found id: ""
	I1009 19:14:47.297817  167468 logs.go:282] 0 containers: []
	W1009 19:14:47.297826  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:47.297837  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:47.297848  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:47.328303  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:47.328319  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:47.398644  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:47.398668  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:47.411075  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:47.411098  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:47.470237  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:47.462608   12938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:47.463190   12938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:47.464787   12938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:47.465180   12938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:47.466459   12938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:47.462608   12938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:47.463190   12938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:47.464787   12938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:47.465180   12938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:47.466459   12938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:47.470247  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:47.470260  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:50.035757  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:50.047268  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:50.047318  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:50.074626  167468 cri.go:89] found id: ""
	I1009 19:14:50.074644  167468 logs.go:282] 0 containers: []
	W1009 19:14:50.074653  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:50.074659  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:50.074726  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:50.101587  167468 cri.go:89] found id: ""
	I1009 19:14:50.101606  167468 logs.go:282] 0 containers: []
	W1009 19:14:50.101616  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:50.101622  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:50.101689  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:50.128912  167468 cri.go:89] found id: ""
	I1009 19:14:50.128964  167468 logs.go:282] 0 containers: []
	W1009 19:14:50.128983  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:50.128992  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:50.129079  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:50.157233  167468 cri.go:89] found id: ""
	I1009 19:14:50.157253  167468 logs.go:282] 0 containers: []
	W1009 19:14:50.157261  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:50.157266  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:50.157319  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:50.185689  167468 cri.go:89] found id: ""
	I1009 19:14:50.185716  167468 logs.go:282] 0 containers: []
	W1009 19:14:50.185725  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:50.185731  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:50.185792  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:50.213094  167468 cri.go:89] found id: ""
	I1009 19:14:50.213111  167468 logs.go:282] 0 containers: []
	W1009 19:14:50.213120  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:50.213128  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:50.213182  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:50.241332  167468 cri.go:89] found id: ""
	I1009 19:14:50.241348  167468 logs.go:282] 0 containers: []
	W1009 19:14:50.241355  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:50.241364  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:50.241393  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:50.302370  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:50.293815   13042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:50.294883   13042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:50.296524   13042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:50.296998   13042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:50.298663   13042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:50.293815   13042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:50.294883   13042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:50.296524   13042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:50.296998   13042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:50.298663   13042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:50.302398  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:50.302412  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:50.365923  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:50.365946  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:50.396814  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:50.396831  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:50.465484  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:50.465506  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
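For reference, each polling cycle above runs the same probes against the node; a minimal sketch of performing the same check by hand (assuming SSH access into the minikube node and the binary/kubeconfig paths shown in this log) is:

	# is a kube-apiserver process running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# does CRI-O know about a kube-apiserver container, running or exited?
	sudo crictl ps -a --quiet --name=kube-apiserver
	# query the control plane directly; this is the call that fails here with "connection refused"
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig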
	I1009 19:14:52.979572  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:52.990584  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:52.990647  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:53.017772  167468 cri.go:89] found id: ""
	I1009 19:14:53.017788  167468 logs.go:282] 0 containers: []
	W1009 19:14:53.017795  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:53.017799  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:53.017848  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:53.043918  167468 cri.go:89] found id: ""
	I1009 19:14:53.043945  167468 logs.go:282] 0 containers: []
	W1009 19:14:53.043952  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:53.043957  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:53.044008  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:53.072767  167468 cri.go:89] found id: ""
	I1009 19:14:53.072786  167468 logs.go:282] 0 containers: []
	W1009 19:14:53.072795  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:53.072802  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:53.072854  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:53.100586  167468 cri.go:89] found id: ""
	I1009 19:14:53.100602  167468 logs.go:282] 0 containers: []
	W1009 19:14:53.100608  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:53.100613  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:53.100660  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:53.127701  167468 cri.go:89] found id: ""
	I1009 19:14:53.127720  167468 logs.go:282] 0 containers: []
	W1009 19:14:53.127727  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:53.127732  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:53.127779  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:53.155552  167468 cri.go:89] found id: ""
	I1009 19:14:53.155571  167468 logs.go:282] 0 containers: []
	W1009 19:14:53.155578  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:53.155583  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:53.155640  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:53.183112  167468 cri.go:89] found id: ""
	I1009 19:14:53.183128  167468 logs.go:282] 0 containers: []
	W1009 19:14:53.183144  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:53.183156  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:53.183171  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:53.243405  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:53.235518   13164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:53.236187   13164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:53.237791   13164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:53.238263   13164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:53.239863   13164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:53.235518   13164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:53.236187   13164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:53.237791   13164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:53.238263   13164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:53.239863   13164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:53.243416  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:53.243427  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:53.305606  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:53.305630  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:53.335326  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:53.335345  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:53.403649  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:53.403673  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:55.918864  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:55.930447  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:55.930507  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:55.957185  167468 cri.go:89] found id: ""
	I1009 19:14:55.957201  167468 logs.go:282] 0 containers: []
	W1009 19:14:55.957207  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:55.957213  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:55.957265  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:55.984214  167468 cri.go:89] found id: ""
	I1009 19:14:55.984231  167468 logs.go:282] 0 containers: []
	W1009 19:14:55.984237  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:55.984243  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:55.984307  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:56.013635  167468 cri.go:89] found id: ""
	I1009 19:14:56.013654  167468 logs.go:282] 0 containers: []
	W1009 19:14:56.013663  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:56.013671  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:56.013735  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:56.040775  167468 cri.go:89] found id: ""
	I1009 19:14:56.040792  167468 logs.go:282] 0 containers: []
	W1009 19:14:56.040798  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:56.040803  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:56.040849  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:56.066866  167468 cri.go:89] found id: ""
	I1009 19:14:56.066887  167468 logs.go:282] 0 containers: []
	W1009 19:14:56.066893  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:56.066900  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:56.066971  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:56.096224  167468 cri.go:89] found id: ""
	I1009 19:14:56.096240  167468 logs.go:282] 0 containers: []
	W1009 19:14:56.096247  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:56.096252  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:56.096300  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:56.123522  167468 cri.go:89] found id: ""
	I1009 19:14:56.123537  167468 logs.go:282] 0 containers: []
	W1009 19:14:56.123544  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:56.123552  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:56.123566  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:56.191640  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:56.191666  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:56.203892  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:56.203912  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:56.261630  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:56.253807   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:56.254343   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:56.256028   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:56.256654   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:56.258265   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:56.253807   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:56.254343   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:56.256028   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:56.256654   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:56.258265   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:56.261649  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:56.261663  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:56.326722  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:56.326745  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:58.857655  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:58.868964  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:58.869018  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:58.895416  167468 cri.go:89] found id: ""
	I1009 19:14:58.895434  167468 logs.go:282] 0 containers: []
	W1009 19:14:58.895441  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:58.895453  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:58.895511  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:58.922319  167468 cri.go:89] found id: ""
	I1009 19:14:58.922335  167468 logs.go:282] 0 containers: []
	W1009 19:14:58.922343  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:58.922348  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:58.922416  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:58.949902  167468 cri.go:89] found id: ""
	I1009 19:14:58.949918  167468 logs.go:282] 0 containers: []
	W1009 19:14:58.949925  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:58.949930  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:58.949978  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:58.978366  167468 cri.go:89] found id: ""
	I1009 19:14:58.978402  167468 logs.go:282] 0 containers: []
	W1009 19:14:58.978412  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:58.978418  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:58.978481  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:59.004783  167468 cri.go:89] found id: ""
	I1009 19:14:59.004802  167468 logs.go:282] 0 containers: []
	W1009 19:14:59.004812  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:59.004818  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:59.004875  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:59.031676  167468 cri.go:89] found id: ""
	I1009 19:14:59.031692  167468 logs.go:282] 0 containers: []
	W1009 19:14:59.031699  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:59.031704  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:59.031764  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:59.058880  167468 cri.go:89] found id: ""
	I1009 19:14:59.058899  167468 logs.go:282] 0 containers: []
	W1009 19:14:59.058909  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:59.058920  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:59.058933  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:59.117247  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:59.109634   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:59.110238   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:59.111830   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:59.112295   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:59.113884   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:59.109634   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:59.110238   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:59.111830   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:59.112295   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:59.113884   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:59.117261  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:59.117273  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:59.181757  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:59.181781  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:59.211839  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:59.211857  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:59.278338  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:59.278360  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
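The "Gathering logs for ..." steps in each cycle map to plain journalctl/dmesg/crictl invocations; if the same data is needed outside the test harness, the equivalent commands (taken verbatim from the Run: lines above) are:

	# kubelet and CRI-O unit logs, last 400 lines each
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# kernel warnings and errors
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# all containers, falling back to docker if crictl is unavailable
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a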
	I1009 19:15:01.792200  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:15:01.803290  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:15:01.803341  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:15:01.830551  167468 cri.go:89] found id: ""
	I1009 19:15:01.830568  167468 logs.go:282] 0 containers: []
	W1009 19:15:01.830577  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:15:01.830584  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:15:01.830632  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:15:01.858835  167468 cri.go:89] found id: ""
	I1009 19:15:01.858853  167468 logs.go:282] 0 containers: []
	W1009 19:15:01.858859  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:15:01.858864  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:15:01.858910  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:15:01.885090  167468 cri.go:89] found id: ""
	I1009 19:15:01.885111  167468 logs.go:282] 0 containers: []
	W1009 19:15:01.885120  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:15:01.885127  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:15:01.885175  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:15:01.911802  167468 cri.go:89] found id: ""
	I1009 19:15:01.911819  167468 logs.go:282] 0 containers: []
	W1009 19:15:01.911827  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:15:01.911832  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:15:01.911880  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:15:01.938892  167468 cri.go:89] found id: ""
	I1009 19:15:01.938909  167468 logs.go:282] 0 containers: []
	W1009 19:15:01.938916  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:15:01.938927  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:15:01.938977  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:15:01.966243  167468 cri.go:89] found id: ""
	I1009 19:15:01.966259  167468 logs.go:282] 0 containers: []
	W1009 19:15:01.966265  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:15:01.966270  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:15:01.966320  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:15:01.993984  167468 cri.go:89] found id: ""
	I1009 19:15:01.994000  167468 logs.go:282] 0 containers: []
	W1009 19:15:01.994023  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:15:01.994032  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:15:01.994044  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:15:02.006125  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:15:02.006144  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:15:02.064780  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:15:02.057286   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:02.057806   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:02.059460   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:02.059974   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:02.061129   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:15:02.057286   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:02.057806   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:02.059460   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:02.059974   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:02.061129   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:15:02.064797  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:15:02.064810  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:15:02.134945  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:15:02.134968  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:15:02.165969  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:15:02.165989  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:15:04.734526  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:15:04.746112  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:15:04.746199  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:15:04.773650  167468 cri.go:89] found id: ""
	I1009 19:15:04.773669  167468 logs.go:282] 0 containers: []
	W1009 19:15:04.773680  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:15:04.773687  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:15:04.773748  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:15:04.800778  167468 cri.go:89] found id: ""
	I1009 19:15:04.800795  167468 logs.go:282] 0 containers: []
	W1009 19:15:04.800802  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:15:04.800807  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:15:04.800854  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:15:04.828717  167468 cri.go:89] found id: ""
	I1009 19:15:04.828734  167468 logs.go:282] 0 containers: []
	W1009 19:15:04.828741  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:15:04.828746  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:15:04.828809  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:15:04.856797  167468 cri.go:89] found id: ""
	I1009 19:15:04.856814  167468 logs.go:282] 0 containers: []
	W1009 19:15:04.856821  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:15:04.856826  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:15:04.856885  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:15:04.884077  167468 cri.go:89] found id: ""
	I1009 19:15:04.884099  167468 logs.go:282] 0 containers: []
	W1009 19:15:04.884110  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:15:04.884116  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:15:04.884164  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:15:04.911551  167468 cri.go:89] found id: ""
	I1009 19:15:04.911571  167468 logs.go:282] 0 containers: []
	W1009 19:15:04.911581  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:15:04.911588  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:15:04.911641  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:15:04.939637  167468 cri.go:89] found id: ""
	I1009 19:15:04.939656  167468 logs.go:282] 0 containers: []
	W1009 19:15:04.939665  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:15:04.939676  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:15:04.939691  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:15:05.000397  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:15:04.992804   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:04.993434   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:04.995032   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:04.995550   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:04.997065   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:15:04.992804   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:04.993434   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:04.995032   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:04.995550   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:04.997065   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:15:05.000414  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:15:05.000427  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:15:05.062558  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:15:05.062582  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:15:05.095113  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:15:05.095134  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:15:05.167688  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:15:05.167712  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:15:07.681917  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:15:07.692856  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:15:07.692912  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:15:07.720408  167468 cri.go:89] found id: ""
	I1009 19:15:07.720425  167468 logs.go:282] 0 containers: []
	W1009 19:15:07.720431  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:15:07.720436  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:15:07.720485  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:15:07.748034  167468 cri.go:89] found id: ""
	I1009 19:15:07.748055  167468 logs.go:282] 0 containers: []
	W1009 19:15:07.748064  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:15:07.748070  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:15:07.748124  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:15:07.775843  167468 cri.go:89] found id: ""
	I1009 19:15:07.775858  167468 logs.go:282] 0 containers: []
	W1009 19:15:07.775865  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:15:07.775870  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:15:07.775930  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:15:07.803455  167468 cri.go:89] found id: ""
	I1009 19:15:07.803475  167468 logs.go:282] 0 containers: []
	W1009 19:15:07.803485  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:15:07.803492  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:15:07.803543  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:15:07.831128  167468 cri.go:89] found id: ""
	I1009 19:15:07.831145  167468 logs.go:282] 0 containers: []
	W1009 19:15:07.831152  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:15:07.831157  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:15:07.831207  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:15:07.858576  167468 cri.go:89] found id: ""
	I1009 19:15:07.858594  167468 logs.go:282] 0 containers: []
	W1009 19:15:07.858601  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:15:07.858606  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:15:07.858655  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:15:07.885114  167468 cri.go:89] found id: ""
	I1009 19:15:07.885130  167468 logs.go:282] 0 containers: []
	W1009 19:15:07.885136  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:15:07.885144  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:15:07.885154  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:15:07.951050  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:15:07.951073  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:15:07.963260  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:15:07.963277  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:15:08.024291  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:15:08.016184   13786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:08.016764   13786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:08.018467   13786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:08.018939   13786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:08.020486   13786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:15:08.016184   13786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:08.016764   13786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:08.018467   13786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:08.018939   13786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:08.020486   13786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:15:08.024308  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:15:08.024321  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:15:08.089308  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:15:08.089331  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:15:10.619798  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:15:10.631039  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:15:10.631095  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:15:10.658697  167468 cri.go:89] found id: ""
	I1009 19:15:10.658713  167468 logs.go:282] 0 containers: []
	W1009 19:15:10.658720  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:15:10.658728  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:15:10.658784  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:15:10.687176  167468 cri.go:89] found id: ""
	I1009 19:15:10.687195  167468 logs.go:282] 0 containers: []
	W1009 19:15:10.687203  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:15:10.687215  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:15:10.687274  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:15:10.714831  167468 cri.go:89] found id: ""
	I1009 19:15:10.714848  167468 logs.go:282] 0 containers: []
	W1009 19:15:10.714854  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:15:10.714859  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:15:10.714907  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:15:10.742110  167468 cri.go:89] found id: ""
	I1009 19:15:10.742128  167468 logs.go:282] 0 containers: []
	W1009 19:15:10.742135  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:15:10.742142  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:15:10.742191  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:15:10.770141  167468 cri.go:89] found id: ""
	I1009 19:15:10.770157  167468 logs.go:282] 0 containers: []
	W1009 19:15:10.770163  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:15:10.770169  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:15:10.770216  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:15:10.797767  167468 cri.go:89] found id: ""
	I1009 19:15:10.797787  167468 logs.go:282] 0 containers: []
	W1009 19:15:10.797797  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:15:10.797803  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:15:10.797857  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:15:10.825395  167468 cri.go:89] found id: ""
	I1009 19:15:10.825415  167468 logs.go:282] 0 containers: []
	W1009 19:15:10.825425  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:15:10.825436  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:15:10.825456  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:15:10.884784  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:15:10.877121   13895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:10.877714   13895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:10.879474   13895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:10.879980   13895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:10.881232   13895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:15:10.877121   13895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:10.877714   13895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:10.879474   13895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:10.879980   13895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:10.881232   13895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:15:10.884798  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:15:10.884812  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:15:10.949429  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:15:10.949455  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:15:10.980207  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:15:10.980224  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:15:11.045524  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:15:11.045548  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:15:13.559802  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:15:13.571007  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:15:13.571059  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:15:13.597407  167468 cri.go:89] found id: ""
	I1009 19:15:13.597424  167468 logs.go:282] 0 containers: []
	W1009 19:15:13.597430  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:15:13.597435  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:15:13.597489  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:15:13.623563  167468 cri.go:89] found id: ""
	I1009 19:15:13.623583  167468 logs.go:282] 0 containers: []
	W1009 19:15:13.623593  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:15:13.623600  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:15:13.623658  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:15:13.649574  167468 cri.go:89] found id: ""
	I1009 19:15:13.649597  167468 logs.go:282] 0 containers: []
	W1009 19:15:13.649606  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:15:13.649611  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:15:13.649660  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:15:13.677161  167468 cri.go:89] found id: ""
	I1009 19:15:13.677176  167468 logs.go:282] 0 containers: []
	W1009 19:15:13.677183  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:15:13.677187  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:15:13.677235  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:15:13.705296  167468 cri.go:89] found id: ""
	I1009 19:15:13.705311  167468 logs.go:282] 0 containers: []
	W1009 19:15:13.705317  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:15:13.705322  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:15:13.705368  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:15:13.732914  167468 cri.go:89] found id: ""
	I1009 19:15:13.732932  167468 logs.go:282] 0 containers: []
	W1009 19:15:13.732955  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:15:13.732961  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:15:13.733033  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:15:13.759867  167468 cri.go:89] found id: ""
	I1009 19:15:13.759883  167468 logs.go:282] 0 containers: []
	W1009 19:15:13.759890  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:15:13.759899  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:15:13.759908  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:15:13.823220  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:15:13.823246  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:15:13.853281  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:15:13.853303  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:15:13.923620  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:15:13.923644  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:15:13.936705  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:15:13.936724  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:15:13.996614  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:15:13.989060   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:13.989714   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:13.991209   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:13.991732   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:13.992915   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:15:13.989060   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:13.989714   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:13.991209   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:13.991732   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:13.992915   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:15:16.498568  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:15:16.509972  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:15:16.510034  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:15:16.537700  167468 cri.go:89] found id: ""
	I1009 19:15:16.537721  167468 logs.go:282] 0 containers: []
	W1009 19:15:16.537732  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:15:16.537739  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:15:16.537913  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:15:16.565255  167468 cri.go:89] found id: ""
	I1009 19:15:16.565271  167468 logs.go:282] 0 containers: []
	W1009 19:15:16.565277  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:15:16.565282  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:15:16.565328  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:15:16.594281  167468 cri.go:89] found id: ""
	I1009 19:15:16.594297  167468 logs.go:282] 0 containers: []
	W1009 19:15:16.594304  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:15:16.594309  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:15:16.594368  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:15:16.621490  167468 cri.go:89] found id: ""
	I1009 19:15:16.621508  167468 logs.go:282] 0 containers: []
	W1009 19:15:16.621515  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:15:16.621529  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:15:16.621581  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:15:16.650311  167468 cri.go:89] found id: ""
	I1009 19:15:16.650328  167468 logs.go:282] 0 containers: []
	W1009 19:15:16.650336  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:15:16.650343  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:15:16.650419  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:15:16.679567  167468 cri.go:89] found id: ""
	I1009 19:15:16.679587  167468 logs.go:282] 0 containers: []
	W1009 19:15:16.679595  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:15:16.679602  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:15:16.679650  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:15:16.708807  167468 cri.go:89] found id: ""
	I1009 19:15:16.708823  167468 logs.go:282] 0 containers: []
	W1009 19:15:16.708829  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:15:16.708839  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:15:16.708853  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:15:16.769188  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:15:16.769215  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:15:16.800501  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:15:16.800522  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:15:16.866546  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:15:16.866569  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:15:16.879721  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:15:16.879740  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:15:16.940801  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:15:16.932610   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:16.933242   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:16.935038   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:16.935548   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:16.937177   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:15:16.932610   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:16.933242   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:16.935038   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:16.935548   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:16.937177   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:15:19.441719  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:15:19.452865  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:15:19.453106  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:15:19.480919  167468 cri.go:89] found id: ""
	I1009 19:15:19.480970  167468 logs.go:282] 0 containers: []
	W1009 19:15:19.480980  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:15:19.480986  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:15:19.481049  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:15:19.508412  167468 cri.go:89] found id: ""
	I1009 19:15:19.508428  167468 logs.go:282] 0 containers: []
	W1009 19:15:19.508435  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:15:19.508439  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:15:19.508505  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:15:19.535889  167468 cri.go:89] found id: ""
	I1009 19:15:19.535906  167468 logs.go:282] 0 containers: []
	W1009 19:15:19.535912  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:15:19.535919  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:15:19.535972  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:15:19.562894  167468 cri.go:89] found id: ""
	I1009 19:15:19.562910  167468 logs.go:282] 0 containers: []
	W1009 19:15:19.562916  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:15:19.562923  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:15:19.562982  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:15:19.590804  167468 cri.go:89] found id: ""
	I1009 19:15:19.590820  167468 logs.go:282] 0 containers: []
	W1009 19:15:19.590829  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:15:19.590837  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:15:19.590911  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:15:19.618341  167468 cri.go:89] found id: ""
	I1009 19:15:19.618356  167468 logs.go:282] 0 containers: []
	W1009 19:15:19.618362  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:15:19.618367  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:15:19.618440  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:15:19.646546  167468 cri.go:89] found id: ""
	I1009 19:15:19.646567  167468 logs.go:282] 0 containers: []
	W1009 19:15:19.646573  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:15:19.646581  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:15:19.646595  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:15:19.715578  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:15:19.715601  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:15:19.727811  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:15:19.727831  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:15:19.788607  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:15:19.780608   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:19.781186   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:19.782870   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:19.783356   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:19.784919   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:15:19.780608   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:19.781186   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:19.782870   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:19.783356   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:19.784919   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:15:19.788631  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:15:19.788647  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:15:19.847876  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:15:19.847900  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:15:22.381584  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:15:22.392889  167468 kubeadm.go:601] duration metric: took 4m4.348960089s to restartPrimaryControlPlane
	W1009 19:15:22.392982  167468 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 19:15:22.393529  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 19:15:22.850885  167468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:15:22.864335  167468 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:15:22.873145  167468 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:15:22.873189  167468 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:15:22.881423  167468 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:15:22.881441  167468 kubeadm.go:157] found existing configuration files:
	
	I1009 19:15:22.881497  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 19:15:22.889858  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:15:22.889971  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:15:22.897974  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 19:15:22.906291  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:15:22.906340  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:15:22.914415  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 19:15:22.922536  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:15:22.922599  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:15:22.931121  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 19:15:22.939993  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:15:22.940039  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:15:22.948051  167468 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:15:22.986697  167468 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:15:22.986748  167468 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:15:23.008875  167468 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:15:23.008934  167468 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:15:23.008988  167468 kubeadm.go:318] OS: Linux
	I1009 19:15:23.009036  167468 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:15:23.009103  167468 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:15:23.009177  167468 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:15:23.009236  167468 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:15:23.009299  167468 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:15:23.009395  167468 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:15:23.009455  167468 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:15:23.009494  167468 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:15:23.074858  167468 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:15:23.074976  167468 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:15:23.075090  167468 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:15:23.082442  167468 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:15:23.086775  167468 out.go:252]   - Generating certificates and keys ...
	I1009 19:15:23.086906  167468 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:15:23.086998  167468 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:15:23.087108  167468 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:15:23.087219  167468 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:15:23.087316  167468 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:15:23.087390  167468 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:15:23.087481  167468 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:15:23.087562  167468 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:15:23.087646  167468 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:15:23.087719  167468 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:15:23.087760  167468 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:15:23.087822  167468 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:15:23.221125  167468 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:15:23.460801  167468 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:15:23.654451  167468 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:15:24.356245  167468 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:15:24.473269  167468 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:15:24.473898  167468 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:15:24.476149  167468 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:15:24.477738  167468 out.go:252]   - Booting up control plane ...
	I1009 19:15:24.477865  167468 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:15:24.477931  167468 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:15:24.478446  167468 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:15:24.492764  167468 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:15:24.492874  167468 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:15:24.499467  167468 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:15:24.499575  167468 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:15:24.499618  167468 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:15:24.605084  167468 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:15:24.605222  167468 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:15:25.606067  167468 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001072895s
	I1009 19:15:25.610397  167468 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:15:25.610526  167468 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1009 19:15:25.610654  167468 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:15:25.610769  167468 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:19:25.611835  167468 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000416121s
	I1009 19:19:25.611992  167468 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000591031s
	I1009 19:19:25.612097  167468 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000888179s
	I1009 19:19:25.612103  167468 kubeadm.go:318] 
	I1009 19:19:25.612253  167468 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:19:25.612445  167468 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:19:25.612656  167468 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:19:25.612825  167468 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:19:25.612930  167468 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:19:25.613139  167468 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:19:25.613162  167468 kubeadm.go:318] 
	I1009 19:19:25.616947  167468 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:19:25.617060  167468 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:19:25.617572  167468 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 19:19:25.617651  167468 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1009 19:19:25.617804  167468 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001072895s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000416121s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000591031s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000888179s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 19:19:25.617887  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 19:19:26.066027  167468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:19:26.078995  167468 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:19:26.079043  167468 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:19:26.087404  167468 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:19:26.087421  167468 kubeadm.go:157] found existing configuration files:
	
	I1009 19:19:26.087474  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 19:19:26.095518  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:19:26.095582  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:19:26.103154  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 19:19:26.111105  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:19:26.111146  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:19:26.119058  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 19:19:26.127484  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:19:26.127537  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:19:26.135357  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 19:19:26.143254  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:19:26.143297  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:19:26.151189  167468 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:19:26.210779  167468 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:19:26.274405  167468 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:23:28.750127  167468 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 19:23:28.750319  167468 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 19:23:28.753500  167468 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:23:28.753545  167468 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:23:28.753617  167468 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:23:28.753661  167468 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:23:28.753718  167468 kubeadm.go:318] OS: Linux
	I1009 19:23:28.753755  167468 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:23:28.753798  167468 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:23:28.753837  167468 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:23:28.753879  167468 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:23:28.753920  167468 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:23:28.753966  167468 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:23:28.754009  167468 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:23:28.754044  167468 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:23:28.754106  167468 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:23:28.754188  167468 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:23:28.754294  167468 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:23:28.754356  167468 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:23:28.761169  167468 out.go:252]   - Generating certificates and keys ...
	I1009 19:23:28.761262  167468 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:23:28.761315  167468 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:23:28.761440  167468 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:23:28.761501  167468 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:23:28.761579  167468 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:23:28.761622  167468 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:23:28.761682  167468 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:23:28.761749  167468 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:23:28.761806  167468 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:23:28.761871  167468 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:23:28.761900  167468 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:23:28.761950  167468 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:23:28.761989  167468 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:23:28.762031  167468 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:23:28.762071  167468 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:23:28.762123  167468 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:23:28.762165  167468 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:23:28.762242  167468 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:23:28.762313  167468 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:23:28.766946  167468 out.go:252]   - Booting up control plane ...
	I1009 19:23:28.767031  167468 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:23:28.767110  167468 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:23:28.767177  167468 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:23:28.767279  167468 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:23:28.767361  167468 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:23:28.767493  167468 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:23:28.767564  167468 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:23:28.767596  167468 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:23:28.767740  167468 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:23:28.767825  167468 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:23:28.767878  167468 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001082703s
	I1009 19:23:28.767963  167468 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:23:28.768033  167468 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1009 19:23:28.768102  167468 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:23:28.768166  167468 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:23:28.768228  167468 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000814983s
	I1009 19:23:28.768298  167468 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001138726s
	I1009 19:23:28.768353  167468 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001237075s
	I1009 19:23:28.768355  167468 kubeadm.go:318] 
	I1009 19:23:28.768454  167468 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:23:28.768516  167468 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:23:28.768593  167468 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:23:28.768716  167468 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:23:28.768790  167468 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:23:28.768868  167468 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:23:28.768903  167468 kubeadm.go:318] 
	I1009 19:23:28.768957  167468 kubeadm.go:402] duration metric: took 12m10.761538861s to StartCluster
	I1009 19:23:28.769014  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:23:28.769073  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:23:28.798618  167468 cri.go:89] found id: ""
	I1009 19:23:28.798638  167468 logs.go:282] 0 containers: []
	W1009 19:23:28.798647  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:23:28.798655  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:23:28.798723  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:23:28.826917  167468 cri.go:89] found id: ""
	I1009 19:23:28.826933  167468 logs.go:282] 0 containers: []
	W1009 19:23:28.826940  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:23:28.826945  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:23:28.827008  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:23:28.855079  167468 cri.go:89] found id: ""
	I1009 19:23:28.855097  167468 logs.go:282] 0 containers: []
	W1009 19:23:28.855103  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:23:28.855108  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:23:28.855157  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:23:28.884473  167468 cri.go:89] found id: ""
	I1009 19:23:28.884493  167468 logs.go:282] 0 containers: []
	W1009 19:23:28.884503  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:23:28.884509  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:23:28.884563  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:23:28.911619  167468 cri.go:89] found id: ""
	I1009 19:23:28.911637  167468 logs.go:282] 0 containers: []
	W1009 19:23:28.911646  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:23:28.911653  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:23:28.911729  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:23:28.940299  167468 cri.go:89] found id: ""
	I1009 19:23:28.940316  167468 logs.go:282] 0 containers: []
	W1009 19:23:28.940325  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:23:28.940332  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:23:28.940417  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:23:28.967431  167468 cri.go:89] found id: ""
	I1009 19:23:28.967448  167468 logs.go:282] 0 containers: []
	W1009 19:23:28.967455  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:23:28.967464  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:23:28.967475  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:23:29.033707  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:23:29.033734  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:23:29.046262  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:23:29.046281  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:23:29.107779  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:23:29.100355   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:29.100974   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:29.102094   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:29.102502   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:29.104088   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:23:29.100355   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:29.100974   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:29.102094   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:29.102502   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:29.104088   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:23:29.107791  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:23:29.107803  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:23:29.172081  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:23:29.172106  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 19:23:29.202987  167468 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001082703s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000814983s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001138726s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001237075s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 19:23:29.203031  167468 out.go:285] * 
	W1009 19:23:29.203144  167468 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001082703s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000814983s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001138726s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001237075s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:23:29.203160  167468 out.go:285] * 
	W1009 19:23:29.204930  167468 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:23:29.208458  167468 out.go:203] 
	W1009 19:23:29.209891  167468 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001082703s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000814983s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001138726s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001237075s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:23:29.209916  167468 out.go:285] * 
	I1009 19:23:29.211473  167468 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:23:21 functional-158523 crio[5871]: time="2025-10-09T19:23:21.383890963Z" level=info msg="createCtr: removing container 52759f352f3bc676ab5b49a07a9d85f567d2e7279dd6e66b537befb9c34b9563" id=6bc74866-bd1a-4fe3-b2fe-ab4f48ef66c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:21 functional-158523 crio[5871]: time="2025-10-09T19:23:21.383925125Z" level=info msg="createCtr: deleting container 52759f352f3bc676ab5b49a07a9d85f567d2e7279dd6e66b537befb9c34b9563 from storage" id=6bc74866-bd1a-4fe3-b2fe-ab4f48ef66c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:21 functional-158523 crio[5871]: time="2025-10-09T19:23:21.38602149Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-158523_kube-system_8f4f9df5924bbaa4e1ec7f60e6576647_0" id=6bc74866-bd1a-4fe3-b2fe-ab4f48ef66c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.361369439Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=564b8cc3-706f-4ebc-85fc-e418d1c3752d name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.362346701Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=4d15c296-f8d7-4176-9190-700d112b9572 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.363786744Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-158523/kube-controller-manager" id=9e8f884d-15aa-4c75-8b1a-d77921fb52c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.364209596Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.368303863Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.368759728Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.385563675Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=9e8f884d-15aa-4c75-8b1a-d77921fb52c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.387017596Z" level=info msg="createCtr: deleting container ID 7903786c584f6892e4b56affb9c65eed6407c04a3870e7970134bf671afc0f1d from idIndex" id=9e8f884d-15aa-4c75-8b1a-d77921fb52c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.387061157Z" level=info msg="createCtr: removing container 7903786c584f6892e4b56affb9c65eed6407c04a3870e7970134bf671afc0f1d" id=9e8f884d-15aa-4c75-8b1a-d77921fb52c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.387095444Z" level=info msg="createCtr: deleting container 7903786c584f6892e4b56affb9c65eed6407c04a3870e7970134bf671afc0f1d from storage" id=9e8f884d-15aa-4c75-8b1a-d77921fb52c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.389220675Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-158523_kube-system_08a2248bbc397b9ed927890e2073120b_0" id=9e8f884d-15aa-4c75-8b1a-d77921fb52c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.361700431Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=3035940c-3eb2-4f17-9268-cf6479d33a9c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.3626609Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=cb82b414-d303-41e2-99d2-2720900c87b1 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.363666995Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-158523/kube-scheduler" id=f421fdd5-7a90-465c-a8ed-f9d66ced2939 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.363912721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.367677125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.368160014Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.385420562Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f421fdd5-7a90-465c-a8ed-f9d66ced2939 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.386768506Z" level=info msg="createCtr: deleting container ID 7ca39f0bfdca7c6677a8404742b48165f0f4969589d4ccb2467e982a6dd7797a from idIndex" id=f421fdd5-7a90-465c-a8ed-f9d66ced2939 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.386809948Z" level=info msg="createCtr: removing container 7ca39f0bfdca7c6677a8404742b48165f0f4969589d4ccb2467e982a6dd7797a" id=f421fdd5-7a90-465c-a8ed-f9d66ced2939 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.386848933Z" level=info msg="createCtr: deleting container 7ca39f0bfdca7c6677a8404742b48165f0f4969589d4ccb2467e982a6dd7797a from storage" id=f421fdd5-7a90-465c-a8ed-f9d66ced2939 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.38924825Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-158523_kube-system_589c70f36d169281ef056387fc3a74a2_0" id=f421fdd5-7a90-465c-a8ed-f9d66ced2939 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:23:30.423216   15752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:30.423956   15752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:30.425241   15752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:30.425663   15752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:30.427183   15752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:23:30 up  1:05,  0 user,  load average: 0.06, 0.08, 4.21
	Linux functional-158523 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:23:21 functional-158523 kubelet[14998]:         container etcd start failed in pod etcd-functional-158523_kube-system(8f4f9df5924bbaa4e1ec7f60e6576647): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:21 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:23:21 functional-158523 kubelet[14998]: E1009 19:23:21.386564   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-158523" podUID="8f4f9df5924bbaa4e1ec7f60e6576647"
	Oct 09 19:23:22 functional-158523 kubelet[14998]: E1009 19:23:22.189594   14998 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-158523.186ce8d7e4fa8e80  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-158523,UID:functional-158523,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-158523 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-158523,},FirstTimestamp:2025-10-09 19:19:28.352259712 +0000 UTC m=+0.607993345,LastTimestamp:2025-10-09 19:19:28.352259712 +0000 UTC m=+0.607993345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-158523,}"
	Oct 09 19:23:22 functional-158523 kubelet[14998]: E1009 19:23:22.360879   14998 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:23:22 functional-158523 kubelet[14998]: E1009 19:23:22.389641   14998 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:23:22 functional-158523 kubelet[14998]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:22 functional-158523 kubelet[14998]:  > podSandboxID="c46b8882958a3d5604399e1a44a408e9b7fbd2d13564b122e7c9bc822d9ccdf7"
	Oct 09 19:23:22 functional-158523 kubelet[14998]: E1009 19:23:22.389750   14998 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:23:22 functional-158523 kubelet[14998]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-158523_kube-system(08a2248bbc397b9ed927890e2073120b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:22 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:23:22 functional-158523 kubelet[14998]: E1009 19:23:22.389780   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-158523" podUID="08a2248bbc397b9ed927890e2073120b"
	Oct 09 19:23:24 functional-158523 kubelet[14998]: E1009 19:23:24.984891   14998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-158523?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 19:23:25 functional-158523 kubelet[14998]: I1009 19:23:25.146305   14998 kubelet_node_status.go:75] "Attempting to register node" node="functional-158523"
	Oct 09 19:23:25 functional-158523 kubelet[14998]: E1009 19:23:25.146731   14998 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-158523"
	Oct 09 19:23:26 functional-158523 kubelet[14998]: E1009 19:23:26.361185   14998 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:23:26 functional-158523 kubelet[14998]: E1009 19:23:26.389658   14998 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:23:26 functional-158523 kubelet[14998]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:26 functional-158523 kubelet[14998]:  > podSandboxID="ec5fd20197d3cb2af48faa87c42dae73063f326b50e117bd23262f4dc00885b3"
	Oct 09 19:23:26 functional-158523 kubelet[14998]: E1009 19:23:26.389797   14998 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:23:26 functional-158523 kubelet[14998]:         container kube-scheduler start failed in pod kube-scheduler-functional-158523_kube-system(589c70f36d169281ef056387fc3a74a2): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:26 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:23:26 functional-158523 kubelet[14998]: E1009 19:23:26.389838   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-158523" podUID="589c70f36d169281ef056387fc3a74a2"
	Oct 09 19:23:28 functional-158523 kubelet[14998]: E1009 19:23:28.373929   14998 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-158523\" not found"
	Oct 09 19:23:29 functional-158523 kubelet[14998]: E1009 19:23:29.511170   14998 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523: exit status 2 (311.75584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-158523" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (737.05s)
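The kubeadm output captured above already names the failure mode: every control-plane container create fails with "cannot open sd-bus: No such file or directory", so nothing ever listens on 8441. A minimal follow-up sketch, assuming shell access to the functional-158523 node (for example via `minikube ssh -p functional-158523`), that only re-runs the crictl and journalctl commands already shown in this log:

	# list all Kubernetes containers known to CRI-O (the crictl command suggested by kubeadm above)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect a failing container's logs; CONTAINERID is a placeholder for an ID from the previous command
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# the same kubelet and CRI-O journals the test harness gathers above
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400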

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (2s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-158523 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-158523 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (52.293995ms)

                                                
                                                
** stderr ** 
	E1009 19:23:31.211795  181130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:31.212205  181130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:31.213688  181130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:31.213997  181130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:31.215297  181130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-158523 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-158523
helpers_test.go:243: (dbg) docker inspect functional-158523:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	        "Created": "2025-10-09T18:56:39.519997973Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 156103,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:56:39.555178961Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hosts",
	        "LogPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4-json.log",
	        "Name": "/functional-158523",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-158523:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-158523",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	                "LowerDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-158523",
	                "Source": "/var/lib/docker/volumes/functional-158523/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-158523",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-158523",
	                "name.minikube.sigs.k8s.io": "functional-158523",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "50f61b8c99f766763e9e30d37584e537ddf10f74215a382a4221cbe7f7c2e821",
	            "SandboxKey": "/var/run/docker/netns/50f61b8c99f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-158523": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:a1:34:24:78:d1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6347eb7b6b2842e266f51d3873c8aa169a4287ea52510c3d62dc4ed41012963c",
	                    "EndpointID": "eb1ada23cbc058d52037dd94574627dd1b0fedf455f291e656f050bf06881952",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-158523",
	                        "dea3735c567c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523: exit status 2 (298.442379ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 logs -n 25
helpers_test.go:260: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-656427 --log_dir /tmp/nospam-656427 unpause                                                            │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ unpause │ nospam-656427 --log_dir /tmp/nospam-656427 unpause                                                            │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ unpause │ nospam-656427 --log_dir /tmp/nospam-656427 unpause                                                            │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ nospam-656427 --log_dir /tmp/nospam-656427 stop                                                               │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ nospam-656427 --log_dir /tmp/nospam-656427 stop                                                               │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ nospam-656427 --log_dir /tmp/nospam-656427 stop                                                               │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ delete  │ -p nospam-656427                                                                                              │ nospam-656427     │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ start   │ -p functional-158523 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ start   │ -p functional-158523 --alsologtostderr -v=8                                                                   │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:04 UTC │                     │
	│ cache   │ functional-158523 cache add registry.k8s.io/pause:3.1                                                         │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ functional-158523 cache add registry.k8s.io/pause:3.3                                                         │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ functional-158523 cache add registry.k8s.io/pause:latest                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ functional-158523 cache add minikube-local-cache-test:functional-158523                                       │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ functional-158523 cache delete minikube-local-cache-test:functional-158523                                    │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-158523 ssh sudo crictl images                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-158523 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-158523 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │                     │
	│ cache   │ functional-158523 cache reload                                                                                │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-158523 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ kubectl │ functional-158523 kubectl -- --context functional-158523 get pods                                             │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │                     │
	│ start   │ -p functional-158523 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:11:14
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:11:14.157038  167468 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:11:14.157144  167468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:11:14.157147  167468 out.go:374] Setting ErrFile to fd 2...
	I1009 19:11:14.157150  167468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:11:14.157397  167468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:11:14.157856  167468 out.go:368] Setting JSON to false
	I1009 19:11:14.158722  167468 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3223,"bootTime":1760033851,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:11:14.158807  167468 start.go:143] virtualization: kvm guest
	I1009 19:11:14.160952  167468 out.go:179] * [functional-158523] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:11:14.162586  167468 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:11:14.162634  167468 notify.go:221] Checking for updates...
	I1009 19:11:14.165525  167468 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:11:14.166942  167468 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:11:14.170608  167468 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:11:14.171837  167468 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:11:14.173196  167468 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:11:14.175072  167468 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:11:14.175208  167468 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:11:14.203136  167468 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:11:14.203286  167468 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:11:14.264483  167468 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-09 19:11:14.254475753 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:11:14.264582  167468 docker.go:319] overlay module found
	I1009 19:11:14.266408  167468 out.go:179] * Using the docker driver based on existing profile
	I1009 19:11:14.267558  167468 start.go:309] selected driver: docker
	I1009 19:11:14.267564  167468 start.go:930] validating driver "docker" against &{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:11:14.267655  167468 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:11:14.267744  167468 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:11:14.329654  167468 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-09 19:11:14.319992483 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:11:14.330205  167468 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:11:14.330223  167468 cni.go:84] Creating CNI manager for ""
	I1009 19:11:14.330253  167468 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:11:14.330287  167468 start.go:353] cluster config:
	{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:11:14.332505  167468 out.go:179] * Starting "functional-158523" primary control-plane node in "functional-158523" cluster
	I1009 19:11:14.334058  167468 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:11:14.335345  167468 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:11:14.336493  167468 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:11:14.336527  167468 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:11:14.336536  167468 cache.go:58] Caching tarball of preloaded images
	I1009 19:11:14.336602  167468 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:11:14.336625  167468 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:11:14.336631  167468 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:11:14.336732  167468 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/config.json ...
	I1009 19:11:14.356941  167468 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:11:14.356956  167468 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:11:14.356970  167468 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:11:14.356995  167468 start.go:361] acquireMachinesLock for functional-158523: {Name:mk995713bbd40419f859c4a8640c8ada0479020c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:11:14.357048  167468 start.go:365] duration metric: took 38.867µs to acquireMachinesLock for "functional-158523"
	I1009 19:11:14.357061  167468 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:11:14.357066  167468 fix.go:55] fixHost starting: 
	I1009 19:11:14.357257  167468 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:11:14.373853  167468 fix.go:113] recreateIfNeeded on functional-158523: state=Running err=<nil>
	W1009 19:11:14.373882  167468 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:11:14.375583  167468 out.go:252] * Updating the running docker "functional-158523" container ...
	I1009 19:11:14.375606  167468 machine.go:93] provisionDockerMachine start ...
	I1009 19:11:14.375672  167468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:11:14.393133  167468 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:14.393345  167468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:11:14.393352  167468 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:11:14.538696  167468 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-158523
	
	I1009 19:11:14.538716  167468 ubuntu.go:182] provisioning hostname "functional-158523"
	I1009 19:11:14.538785  167468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:11:14.557084  167468 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:14.557356  167468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:11:14.557367  167468 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-158523 && echo "functional-158523" | sudo tee /etc/hostname
	I1009 19:11:14.713522  167468 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-158523
	
	I1009 19:11:14.713596  167468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:11:14.731559  167468 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:14.731842  167468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:11:14.731856  167468 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-158523' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-158523/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-158523' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:11:14.877193  167468 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:11:14.877220  167468 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:11:14.877247  167468 ubuntu.go:190] setting up certificates
	I1009 19:11:14.877258  167468 provision.go:84] configureAuth start
	I1009 19:11:14.877334  167468 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-158523
	I1009 19:11:14.894643  167468 provision.go:143] copyHostCerts
	I1009 19:11:14.894694  167468 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:11:14.894709  167468 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:11:14.894773  167468 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:11:14.894862  167468 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:11:14.894865  167468 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:11:14.894889  167468 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:11:14.894937  167468 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:11:14.894940  167468 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:11:14.894959  167468 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:11:14.895003  167468 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.functional-158523 san=[127.0.0.1 192.168.49.2 functional-158523 localhost minikube]
	I1009 19:11:15.233918  167468 provision.go:177] copyRemoteCerts
	I1009 19:11:15.233967  167468 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:11:15.234007  167468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:11:15.251853  167468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:11:15.355329  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:11:15.374955  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 19:11:15.393475  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:11:15.412247  167468 provision.go:87] duration metric: took 534.974389ms to configureAuth
	I1009 19:11:15.412267  167468 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:11:15.412477  167468 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:11:15.412594  167468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:11:15.430627  167468 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:15.430837  167468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 19:11:15.430849  167468 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:11:15.707832  167468 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:11:15.707848  167468 machine.go:96] duration metric: took 1.33223564s to provisionDockerMachine
	I1009 19:11:15.707858  167468 start.go:294] postStartSetup for "functional-158523" (driver="docker")
	I1009 19:11:15.707868  167468 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:11:15.707919  167468 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:11:15.707980  167468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:11:15.725705  167468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:11:15.827905  167468 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:11:15.831650  167468 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:11:15.831668  167468 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:11:15.831679  167468 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:11:15.831740  167468 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:11:15.831815  167468 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:11:15.831878  167468 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts -> hosts in /etc/test/nested/copy/141519
	I1009 19:11:15.831909  167468 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/141519
	I1009 19:11:15.839531  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:11:15.857737  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts --> /etc/test/nested/copy/141519/hosts (40 bytes)
	I1009 19:11:15.875073  167468 start.go:297] duration metric: took 167.196866ms for postStartSetup
	I1009 19:11:15.875151  167468 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:11:15.875185  167468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:11:15.893217  167468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:11:15.993724  167468 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:11:15.998524  167468 fix.go:57] duration metric: took 1.641448896s for fixHost
	I1009 19:11:15.998548  167468 start.go:84] releasing machines lock for "functional-158523", held for 1.641493243s
	I1009 19:11:15.998615  167468 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-158523
	I1009 19:11:16.017075  167468 ssh_runner.go:195] Run: cat /version.json
	I1009 19:11:16.017091  167468 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:11:16.017114  167468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:11:16.017144  167468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:11:16.036046  167468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:11:16.036330  167468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:11:16.188713  167468 ssh_runner.go:195] Run: systemctl --version
	I1009 19:11:16.196168  167468 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:11:16.231948  167468 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:11:16.236768  167468 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:11:16.236819  167468 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:11:16.245113  167468 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:11:16.245131  167468 start.go:496] detecting cgroup driver to use...
	I1009 19:11:16.245167  167468 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:11:16.245211  167468 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:11:16.259663  167468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:11:16.272373  167468 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:11:16.272435  167468 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:11:16.287252  167468 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:11:16.299952  167468 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:11:16.392105  167468 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:11:16.479823  167468 docker.go:234] disabling docker service ...
	I1009 19:11:16.479877  167468 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:11:16.494456  167468 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:11:16.507602  167468 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:11:16.592867  167468 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:11:16.683605  167468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:11:16.710180  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:11:16.725165  167468 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:11:16.725208  167468 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:16.734043  167468 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:11:16.734092  167468 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:16.743004  167468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:16.751778  167468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:16.760817  167468 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:11:16.768978  167468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:16.778147  167468 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:16.786486  167468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:16.795315  167468 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:11:16.802691  167468 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:11:16.809903  167468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:11:16.905667  167468 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:11:17.020220  167468 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:11:17.020286  167468 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:11:17.024261  167468 start.go:564] Will wait 60s for crictl version
	I1009 19:11:17.024305  167468 ssh_runner.go:195] Run: which crictl
	I1009 19:11:17.027760  167468 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:11:17.051881  167468 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:11:17.051942  167468 ssh_runner.go:195] Run: crio --version
	I1009 19:11:17.080716  167468 ssh_runner.go:195] Run: crio --version
	I1009 19:11:17.111432  167468 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:11:17.112945  167468 cli_runner.go:164] Run: docker network inspect functional-158523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:11:17.130349  167468 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:11:17.136436  167468 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1009 19:11:17.137696  167468 kubeadm.go:883] updating cluster {Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:11:17.137806  167468 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:11:17.137860  167468 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:11:17.174863  167468 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:11:17.174875  167468 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:11:17.174927  167468 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:11:17.201355  167468 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:11:17.201367  167468 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:11:17.201372  167468 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1009 19:11:17.201491  167468 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-158523 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:11:17.201558  167468 ssh_runner.go:195] Run: crio config
	I1009 19:11:17.248070  167468 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1009 19:11:17.248092  167468 cni.go:84] Creating CNI manager for ""
	I1009 19:11:17.248099  167468 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:11:17.248108  167468 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:11:17.248129  167468 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-158523 NodeName:functional-158523 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map
[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:11:17.248244  167468 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-158523"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:11:17.248301  167468 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:11:17.256659  167468 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:11:17.256725  167468 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:11:17.265104  167468 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 19:11:17.278149  167468 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:11:17.291161  167468 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1009 19:11:17.304170  167468 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:11:17.308091  167468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:11:17.393652  167468 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:11:17.406930  167468 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523 for IP: 192.168.49.2
	I1009 19:11:17.406944  167468 certs.go:195] generating shared ca certs ...
	I1009 19:11:17.406959  167468 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:17.407115  167468 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:11:17.407147  167468 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:11:17.407152  167468 certs.go:257] generating profile certs ...
	I1009 19:11:17.407227  167468 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.key
	I1009 19:11:17.407261  167468 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key.1809350a
	I1009 19:11:17.407289  167468 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key
	I1009 19:11:17.407430  167468 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:11:17.407466  167468 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:11:17.407475  167468 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:11:17.407500  167468 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:11:17.407523  167468 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:11:17.407548  167468 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:11:17.407584  167468 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:11:17.408210  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:11:17.427246  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:11:17.445339  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:11:17.462828  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:11:17.480653  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:11:17.499524  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:11:17.518652  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:11:17.536330  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:11:17.554544  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:11:17.572216  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:11:17.589806  167468 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:11:17.607162  167468 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:11:17.619605  167468 ssh_runner.go:195] Run: openssl version
	I1009 19:11:17.625893  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:11:17.634967  167468 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:11:17.638971  167468 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:11:17.639017  167468 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:11:17.673097  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:11:17.681781  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:11:17.690510  167468 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:17.694244  167468 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:17.694287  167468 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:17.728858  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:11:17.737406  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:11:17.746208  167468 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:11:17.749994  167468 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:11:17.750054  167468 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:11:17.784891  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:11:17.793493  167468 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:11:17.797539  167468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:11:17.833179  167468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:11:17.867879  167468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:11:17.902538  167468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:11:17.937115  167468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:11:17.972083  167468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
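
The run of `openssl x509 -noout -checkend 86400` commands above only verifies that each control-plane certificate stays valid for at least the next 24 hours (86400 seconds) before the restart proceeds. A minimal sketch of the same expiry check in Go, assuming the apiserver-kubelet-client.crt path shown in the log (an illustration, not minikube's own code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path taken from the log above; any PEM-encoded certificate works here.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM data found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Mirror `openssl x509 -checkend 86400`: fail if the certificate
	// expires within the next 86400 seconds.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 86400 seconds")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 86400 seconds")
}
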
	I1009 19:11:18.007424  167468 kubeadm.go:400] StartCluster: {Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:11:18.007509  167468 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:11:18.007561  167468 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:11:18.035547  167468 cri.go:89] found id: ""
	I1009 19:11:18.035607  167468 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:11:18.043904  167468 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:11:18.043917  167468 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:11:18.043958  167468 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:11:18.051515  167468 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:11:18.052124  167468 kubeconfig.go:125] found "functional-158523" server: "https://192.168.49.2:8441"
	I1009 19:11:18.053652  167468 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:11:18.061973  167468 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-09 18:56:43.847270831 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-09 19:11:17.301680145 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1009 19:11:18.061997  167468 kubeadm.go:1160] stopping kube-system containers ...
	I1009 19:11:18.062011  167468 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 19:11:18.062062  167468 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:11:18.090233  167468 cri.go:89] found id: ""
	I1009 19:11:18.090298  167468 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 19:11:18.135227  167468 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:11:18.143667  167468 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5623 Oct  9 19:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  9 19:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  9 19:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  9 19:00 /etc/kubernetes/scheduler.conf
	
	I1009 19:11:18.143727  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 19:11:18.151903  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 19:11:18.160031  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:11:18.160092  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:11:18.167823  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 19:11:18.175748  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:11:18.175802  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:11:18.184016  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 19:11:18.192107  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:11:18.192164  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:11:18.199911  167468 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:11:18.208125  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:11:18.251392  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:11:19.844491  167468 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.593070913s)
	I1009 19:11:19.844554  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:11:20.007259  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:11:20.056142  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:11:20.106149  167468 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:11:20.106217  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:20.607128  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:21.106506  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:21.607044  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:22.106495  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:22.607290  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:23.107176  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:23.606512  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:24.106477  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:24.607120  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:25.106702  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:25.606496  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:26.107306  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:26.606426  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:27.107156  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:27.606967  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:28.106986  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:28.607360  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:29.106501  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:29.606699  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:30.106988  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:30.606751  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:31.106573  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:31.607271  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:32.107154  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:32.606611  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:33.107242  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:33.607016  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:34.106535  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:34.606754  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:35.107301  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:35.607266  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:36.106318  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:36.606315  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:37.107176  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:37.607281  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:38.106732  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:38.607122  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:39.106818  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:39.606784  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:40.107197  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:40.606991  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:41.107011  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:41.606339  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:42.106963  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:42.606555  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:43.107219  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:43.607105  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:44.106424  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:44.607215  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:45.106602  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:45.607006  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:46.106815  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:46.607280  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:47.106629  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:47.606477  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:48.107415  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:48.607339  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:49.106605  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:49.606757  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:50.106615  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:50.606311  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:51.106589  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:51.606462  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:52.106410  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:52.606644  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:53.106820  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:53.606821  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:54.107031  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:54.607139  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:55.106783  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:55.606601  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:56.107299  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:56.606277  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:57.107229  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:57.606479  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:58.106431  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:58.607303  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:59.107050  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:59.607125  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:00.106731  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:00.606499  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:01.107084  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:01.606814  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:02.106487  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:02.607319  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:03.106362  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:03.606446  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:04.106944  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:04.606981  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:05.106694  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:05.607165  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:06.107147  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:06.607010  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:07.106545  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:07.606527  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:08.106534  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:08.606518  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:09.106332  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:09.607203  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:10.106316  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:10.607212  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:11.107324  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:11.606853  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:12.106689  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:12.607269  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:13.107123  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:13.607171  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:14.107276  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:14.607287  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:15.106491  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:15.606605  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:16.106363  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:16.607071  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:17.106663  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:17.607071  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:18.106932  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:18.607123  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:19.106860  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:19.606746  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:20.107336  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:20.107457  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:20.136348  167468 cri.go:89] found id: ""
	I1009 19:12:20.136367  167468 logs.go:282] 0 containers: []
	W1009 19:12:20.136387  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:20.136398  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:20.136460  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:20.164454  167468 cri.go:89] found id: ""
	I1009 19:12:20.164472  167468 logs.go:282] 0 containers: []
	W1009 19:12:20.164480  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:20.164495  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:20.164552  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:20.191751  167468 cri.go:89] found id: ""
	I1009 19:12:20.191768  167468 logs.go:282] 0 containers: []
	W1009 19:12:20.191775  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:20.191780  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:20.191832  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:20.220093  167468 cri.go:89] found id: ""
	I1009 19:12:20.220110  167468 logs.go:282] 0 containers: []
	W1009 19:12:20.220117  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:20.220122  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:20.220167  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:20.247873  167468 cri.go:89] found id: ""
	I1009 19:12:20.247891  167468 logs.go:282] 0 containers: []
	W1009 19:12:20.247898  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:20.247903  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:20.247956  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:20.276291  167468 cri.go:89] found id: ""
	I1009 19:12:20.276308  167468 logs.go:282] 0 containers: []
	W1009 19:12:20.276315  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:20.276320  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:20.276367  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:20.303968  167468 cri.go:89] found id: ""
	I1009 19:12:20.303987  167468 logs.go:282] 0 containers: []
	W1009 19:12:20.303997  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:20.304008  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:20.304021  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:20.364492  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:20.356948    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:20.357523    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:20.359155    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:20.359653    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:20.360925    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:20.356948    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:20.357523    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:20.359155    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:20.359653    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:20.360925    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:20.364503  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:20.364517  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:20.425746  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:20.425770  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:20.456006  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:20.456025  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:20.527929  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:20.527953  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:23.042459  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:23.053621  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:23.053687  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:23.081180  167468 cri.go:89] found id: ""
	I1009 19:12:23.081199  167468 logs.go:282] 0 containers: []
	W1009 19:12:23.081209  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:23.081217  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:23.081270  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:23.110039  167468 cri.go:89] found id: ""
	I1009 19:12:23.110059  167468 logs.go:282] 0 containers: []
	W1009 19:12:23.110068  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:23.110076  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:23.110137  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:23.138162  167468 cri.go:89] found id: ""
	I1009 19:12:23.138179  167468 logs.go:282] 0 containers: []
	W1009 19:12:23.138185  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:23.138190  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:23.138239  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:23.164707  167468 cri.go:89] found id: ""
	I1009 19:12:23.164724  167468 logs.go:282] 0 containers: []
	W1009 19:12:23.164731  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:23.164736  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:23.164789  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:23.192945  167468 cri.go:89] found id: ""
	I1009 19:12:23.192961  167468 logs.go:282] 0 containers: []
	W1009 19:12:23.192968  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:23.192973  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:23.193032  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:23.220315  167468 cri.go:89] found id: ""
	I1009 19:12:23.220332  167468 logs.go:282] 0 containers: []
	W1009 19:12:23.220339  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:23.220344  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:23.220426  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:23.247691  167468 cri.go:89] found id: ""
	I1009 19:12:23.247708  167468 logs.go:282] 0 containers: []
	W1009 19:12:23.247716  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:23.247727  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:23.247740  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:23.312625  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:23.312649  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:23.345619  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:23.345635  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:23.414184  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:23.414206  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:23.426948  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:23.426967  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:23.487448  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:23.479417    6879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:23.480019    6879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:23.481685    6879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:23.482253    6879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:23.483804    6879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:23.479417    6879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:23.480019    6879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:23.481685    6879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:23.482253    6879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:23.483804    6879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:25.989194  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:26.000187  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:26.000258  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:26.026910  167468 cri.go:89] found id: ""
	I1009 19:12:26.026929  167468 logs.go:282] 0 containers: []
	W1009 19:12:26.026936  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:26.026942  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:26.026993  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:26.054273  167468 cri.go:89] found id: ""
	I1009 19:12:26.054290  167468 logs.go:282] 0 containers: []
	W1009 19:12:26.054296  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:26.054303  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:26.054347  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:26.082937  167468 cri.go:89] found id: ""
	I1009 19:12:26.082953  167468 logs.go:282] 0 containers: []
	W1009 19:12:26.082960  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:26.082965  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:26.083013  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:26.111657  167468 cri.go:89] found id: ""
	I1009 19:12:26.111674  167468 logs.go:282] 0 containers: []
	W1009 19:12:26.111681  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:26.111686  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:26.111744  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:26.138168  167468 cri.go:89] found id: ""
	I1009 19:12:26.138183  167468 logs.go:282] 0 containers: []
	W1009 19:12:26.138190  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:26.138212  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:26.138261  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:26.165234  167468 cri.go:89] found id: ""
	I1009 19:12:26.165258  167468 logs.go:282] 0 containers: []
	W1009 19:12:26.165267  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:26.165274  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:26.165340  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:26.193467  167468 cri.go:89] found id: ""
	I1009 19:12:26.193486  167468 logs.go:282] 0 containers: []
	W1009 19:12:26.193493  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:26.193503  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:26.193520  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:26.252945  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:26.245540    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:26.246126    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:26.247768    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:26.248210    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:26.249337    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:26.245540    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:26.246126    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:26.247768    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:26.248210    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:26.249337    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:26.252967  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:26.252981  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:26.318494  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:26.318518  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:26.349406  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:26.349428  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:26.417386  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:26.417411  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:28.930653  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:28.942481  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:28.942531  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:28.970321  167468 cri.go:89] found id: ""
	I1009 19:12:28.970338  167468 logs.go:282] 0 containers: []
	W1009 19:12:28.970344  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:28.970349  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:28.970413  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:28.996510  167468 cri.go:89] found id: ""
	I1009 19:12:28.996530  167468 logs.go:282] 0 containers: []
	W1009 19:12:28.996539  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:28.996545  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:28.996600  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:29.023259  167468 cri.go:89] found id: ""
	I1009 19:12:29.023277  167468 logs.go:282] 0 containers: []
	W1009 19:12:29.023285  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:29.023292  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:29.023344  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:29.050560  167468 cri.go:89] found id: ""
	I1009 19:12:29.050575  167468 logs.go:282] 0 containers: []
	W1009 19:12:29.050581  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:29.050585  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:29.050640  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:29.078006  167468 cri.go:89] found id: ""
	I1009 19:12:29.078024  167468 logs.go:282] 0 containers: []
	W1009 19:12:29.078031  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:29.078036  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:29.078091  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:29.105506  167468 cri.go:89] found id: ""
	I1009 19:12:29.105523  167468 logs.go:282] 0 containers: []
	W1009 19:12:29.105536  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:29.105541  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:29.105588  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:29.133781  167468 cri.go:89] found id: ""
	I1009 19:12:29.133798  167468 logs.go:282] 0 containers: []
	W1009 19:12:29.133804  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:29.133814  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:29.133828  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:29.164882  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:29.164903  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:29.231999  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:29.232023  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:29.244260  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:29.244278  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:29.302021  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:29.294502    7126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:29.295049    7126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:29.296660    7126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:29.297108    7126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:29.298657    7126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:29.294502    7126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:29.295049    7126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:29.296660    7126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:29.297108    7126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:29.298657    7126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:29.302038  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:29.302057  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:31.867896  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:31.879240  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:31.879294  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:31.905896  167468 cri.go:89] found id: ""
	I1009 19:12:31.905931  167468 logs.go:282] 0 containers: []
	W1009 19:12:31.905941  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:31.905947  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:31.906003  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:31.933637  167468 cri.go:89] found id: ""
	I1009 19:12:31.933653  167468 logs.go:282] 0 containers: []
	W1009 19:12:31.933660  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:31.933670  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:31.933724  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:31.961480  167468 cri.go:89] found id: ""
	I1009 19:12:31.961497  167468 logs.go:282] 0 containers: []
	W1009 19:12:31.961504  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:31.961509  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:31.961566  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:31.988032  167468 cri.go:89] found id: ""
	I1009 19:12:31.988049  167468 logs.go:282] 0 containers: []
	W1009 19:12:31.988056  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:31.988062  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:31.988112  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:32.015108  167468 cri.go:89] found id: ""
	I1009 19:12:32.015124  167468 logs.go:282] 0 containers: []
	W1009 19:12:32.015131  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:32.015136  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:32.015184  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:32.041897  167468 cri.go:89] found id: ""
	I1009 19:12:32.041922  167468 logs.go:282] 0 containers: []
	W1009 19:12:32.041929  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:32.041934  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:32.041979  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:32.068763  167468 cri.go:89] found id: ""
	I1009 19:12:32.068780  167468 logs.go:282] 0 containers: []
	W1009 19:12:32.068788  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:32.068797  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:32.068808  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:32.139869  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:32.139894  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:32.152815  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:32.152832  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:32.210942  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:32.203063    7237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:32.203597    7237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:32.205243    7237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:32.205744    7237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:32.207268    7237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:32.203063    7237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:32.203597    7237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:32.205243    7237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:32.205744    7237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:32.207268    7237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:32.210963  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:32.210977  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:32.276761  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:32.276783  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:34.810074  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:34.821837  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:34.821902  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:34.849063  167468 cri.go:89] found id: ""
	I1009 19:12:34.849080  167468 logs.go:282] 0 containers: []
	W1009 19:12:34.849089  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:34.849099  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:34.849166  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:34.877410  167468 cri.go:89] found id: ""
	I1009 19:12:34.877428  167468 logs.go:282] 0 containers: []
	W1009 19:12:34.877437  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:34.877443  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:34.877522  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:34.906363  167468 cri.go:89] found id: ""
	I1009 19:12:34.906395  167468 logs.go:282] 0 containers: []
	W1009 19:12:34.906410  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:34.906417  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:34.906466  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:34.935845  167468 cri.go:89] found id: ""
	I1009 19:12:34.935864  167468 logs.go:282] 0 containers: []
	W1009 19:12:34.935872  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:34.935877  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:34.935931  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:34.963735  167468 cri.go:89] found id: ""
	I1009 19:12:34.963755  167468 logs.go:282] 0 containers: []
	W1009 19:12:34.963765  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:34.963771  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:34.963827  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:34.991843  167468 cri.go:89] found id: ""
	I1009 19:12:34.991858  167468 logs.go:282] 0 containers: []
	W1009 19:12:34.991864  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:34.991869  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:34.991916  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:35.018519  167468 cri.go:89] found id: ""
	I1009 19:12:35.018536  167468 logs.go:282] 0 containers: []
	W1009 19:12:35.018544  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:35.018555  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:35.018567  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:35.047474  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:35.047494  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:35.115632  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:35.115655  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:35.128101  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:35.128120  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:35.188265  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:35.180353    7377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:35.181068    7377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:35.182692    7377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:35.183163    7377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:35.184740    7377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:35.180353    7377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:35.181068    7377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:35.182692    7377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:35.183163    7377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:35.184740    7377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:35.188276  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:35.188286  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
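The block above is one iteration of minikube's apiserver wait loop: it first looks for a kube-apiserver process, then asks CRI-O (via crictl) for containers of each control-plane component, and only falls back to collecting kubelet, dmesg, describe-nodes and CRI-O logs once every lookup comes back empty. A minimal sketch of the same probe run by hand from inside the node (for example via `minikube ssh`); the commands are the ones already visible in the log, only the comments and grouping are added here:

    # Is an apiserver process running at all?
    sudo pgrep -xnf kube-apiserver.*minikube.*
    # Does CRI-O know about an apiserver or etcd container in any state?
    # Empty output here matches the `found id: ""` lines in the log.
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd

An empty result from these checks is what drives the repeated "No container was found matching ..." warnings in each iteration.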
	I1009 19:12:37.755993  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:37.767167  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:37.767221  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:37.794066  167468 cri.go:89] found id: ""
	I1009 19:12:37.794082  167468 logs.go:282] 0 containers: []
	W1009 19:12:37.794089  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:37.794095  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:37.794146  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:37.822922  167468 cri.go:89] found id: ""
	I1009 19:12:37.822938  167468 logs.go:282] 0 containers: []
	W1009 19:12:37.822944  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:37.822949  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:37.823009  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:37.850138  167468 cri.go:89] found id: ""
	I1009 19:12:37.850157  167468 logs.go:282] 0 containers: []
	W1009 19:12:37.850164  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:37.850170  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:37.850221  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:37.878740  167468 cri.go:89] found id: ""
	I1009 19:12:37.878767  167468 logs.go:282] 0 containers: []
	W1009 19:12:37.878774  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:37.878779  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:37.878831  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:37.906691  167468 cri.go:89] found id: ""
	I1009 19:12:37.906709  167468 logs.go:282] 0 containers: []
	W1009 19:12:37.906719  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:37.906725  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:37.906787  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:37.935304  167468 cri.go:89] found id: ""
	I1009 19:12:37.935423  167468 logs.go:282] 0 containers: []
	W1009 19:12:37.935437  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:37.935446  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:37.935516  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:37.962029  167468 cri.go:89] found id: ""
	I1009 19:12:37.962050  167468 logs.go:282] 0 containers: []
	W1009 19:12:37.962060  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:37.962070  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:37.962085  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:38.021180  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:38.013500    7482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:38.014003    7482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:38.015677    7482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:38.016220    7482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:38.017804    7482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:38.013500    7482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:38.014003    7482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:38.015677    7482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:38.016220    7482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:38.017804    7482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:38.021190  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:38.021201  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:38.087907  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:38.087937  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:38.121749  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:38.121769  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:38.190423  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:38.190452  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:40.704051  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:40.715312  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:40.715363  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:40.742832  167468 cri.go:89] found id: ""
	I1009 19:12:40.742849  167468 logs.go:282] 0 containers: []
	W1009 19:12:40.742858  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:40.742864  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:40.742936  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:40.769708  167468 cri.go:89] found id: ""
	I1009 19:12:40.769729  167468 logs.go:282] 0 containers: []
	W1009 19:12:40.769740  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:40.769746  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:40.769803  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:40.796560  167468 cri.go:89] found id: ""
	I1009 19:12:40.796579  167468 logs.go:282] 0 containers: []
	W1009 19:12:40.796589  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:40.796595  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:40.796660  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:40.823161  167468 cri.go:89] found id: ""
	I1009 19:12:40.823182  167468 logs.go:282] 0 containers: []
	W1009 19:12:40.823189  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:40.823197  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:40.823268  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:40.851120  167468 cri.go:89] found id: ""
	I1009 19:12:40.851138  167468 logs.go:282] 0 containers: []
	W1009 19:12:40.851144  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:40.851149  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:40.851197  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:40.876852  167468 cri.go:89] found id: ""
	I1009 19:12:40.876867  167468 logs.go:282] 0 containers: []
	W1009 19:12:40.876873  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:40.876879  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:40.876927  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:40.904162  167468 cri.go:89] found id: ""
	I1009 19:12:40.904177  167468 logs.go:282] 0 containers: []
	W1009 19:12:40.904184  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:40.904193  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:40.904210  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:40.962776  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:40.955114    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:40.955608    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:40.957139    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:40.957571    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:40.959161    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:40.955114    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:40.955608    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:40.957139    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:40.957571    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:40.959161    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:40.962793  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:40.962807  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:41.024362  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:41.024397  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:41.054697  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:41.054715  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:41.129584  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:41.129608  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:43.644081  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:43.655800  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:43.655864  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:43.685781  167468 cri.go:89] found id: ""
	I1009 19:12:43.685798  167468 logs.go:282] 0 containers: []
	W1009 19:12:43.685805  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:43.685811  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:43.685857  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:43.713359  167468 cri.go:89] found id: ""
	I1009 19:12:43.713375  167468 logs.go:282] 0 containers: []
	W1009 19:12:43.713396  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:43.713402  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:43.713451  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:43.740718  167468 cri.go:89] found id: ""
	I1009 19:12:43.740736  167468 logs.go:282] 0 containers: []
	W1009 19:12:43.740743  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:43.740750  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:43.740798  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:43.769427  167468 cri.go:89] found id: ""
	I1009 19:12:43.769443  167468 logs.go:282] 0 containers: []
	W1009 19:12:43.769450  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:43.769455  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:43.769517  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:43.797878  167468 cri.go:89] found id: ""
	I1009 19:12:43.797899  167468 logs.go:282] 0 containers: []
	W1009 19:12:43.797907  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:43.797912  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:43.797968  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:43.825547  167468 cri.go:89] found id: ""
	I1009 19:12:43.825564  167468 logs.go:282] 0 containers: []
	W1009 19:12:43.825570  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:43.825576  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:43.825625  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:43.854019  167468 cri.go:89] found id: ""
	I1009 19:12:43.854039  167468 logs.go:282] 0 containers: []
	W1009 19:12:43.854049  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:43.854060  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:43.854074  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:43.884227  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:43.884245  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:43.951690  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:43.951714  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:43.963786  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:43.963804  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:44.021147  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:44.013190    7751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:44.013778    7751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:44.015326    7751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:44.015859    7751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:44.017425    7751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:44.013190    7751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:44.013778    7751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:44.015326    7751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:44.015859    7751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:44.017425    7751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:44.021159  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:44.021171  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:46.585684  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:46.596993  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:46.597044  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:46.623772  167468 cri.go:89] found id: ""
	I1009 19:12:46.623793  167468 logs.go:282] 0 containers: []
	W1009 19:12:46.623800  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:46.623806  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:46.623856  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:46.652707  167468 cri.go:89] found id: ""
	I1009 19:12:46.652724  167468 logs.go:282] 0 containers: []
	W1009 19:12:46.652730  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:46.652736  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:46.652804  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:46.680752  167468 cri.go:89] found id: ""
	I1009 19:12:46.680770  167468 logs.go:282] 0 containers: []
	W1009 19:12:46.680780  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:46.680786  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:46.680849  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:46.708720  167468 cri.go:89] found id: ""
	I1009 19:12:46.708737  167468 logs.go:282] 0 containers: []
	W1009 19:12:46.708744  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:46.708750  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:46.708798  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:46.734857  167468 cri.go:89] found id: ""
	I1009 19:12:46.734873  167468 logs.go:282] 0 containers: []
	W1009 19:12:46.734880  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:46.734885  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:46.734930  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:46.762094  167468 cri.go:89] found id: ""
	I1009 19:12:46.762113  167468 logs.go:282] 0 containers: []
	W1009 19:12:46.762126  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:46.762133  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:46.762191  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:46.789680  167468 cri.go:89] found id: ""
	I1009 19:12:46.789700  167468 logs.go:282] 0 containers: []
	W1009 19:12:46.789708  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:46.789717  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:46.789728  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:46.861689  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:46.861711  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:46.874752  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:46.874775  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:46.934669  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:46.926336    7868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:46.926983    7868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:46.929273    7868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:46.929845    7868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:46.931396    7868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:46.926336    7868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:46.926983    7868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:46.929273    7868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:46.929845    7868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:46.931396    7868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:46.934679  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:46.934688  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:46.995061  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:46.995084  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
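Every `kubectl describe nodes` attempt in this run fails the same way: connection refused on localhost:8441, the apiserver port this profile's kubeconfig points at. A hedged sketch of confirming that from inside the node, assuming `ss` and `curl` are available in the node image (neither appears in the log above):

    # Is anything listening on the apiserver port the kubeconfig uses?
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    # While the apiserver is down this should fail with 'connection refused',
    # matching the memcache.go errors repeated in the log.
    curl -k https://localhost:8441/livez

If nothing is bound to 8441, the connection-refused errors are expected and the useful information is in the kubelet and CRI-O logs gathered in each iteration.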
	I1009 19:12:49.527642  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:49.538773  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:49.538828  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:49.566556  167468 cri.go:89] found id: ""
	I1009 19:12:49.566573  167468 logs.go:282] 0 containers: []
	W1009 19:12:49.566579  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:49.566584  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:49.566631  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:49.594280  167468 cri.go:89] found id: ""
	I1009 19:12:49.594297  167468 logs.go:282] 0 containers: []
	W1009 19:12:49.594304  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:49.594308  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:49.594360  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:49.622099  167468 cri.go:89] found id: ""
	I1009 19:12:49.622115  167468 logs.go:282] 0 containers: []
	W1009 19:12:49.622122  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:49.622127  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:49.622173  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:49.648411  167468 cri.go:89] found id: ""
	I1009 19:12:49.648430  167468 logs.go:282] 0 containers: []
	W1009 19:12:49.648437  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:49.648442  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:49.648506  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:49.676244  167468 cri.go:89] found id: ""
	I1009 19:12:49.676260  167468 logs.go:282] 0 containers: []
	W1009 19:12:49.676266  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:49.676272  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:49.676320  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:49.703539  167468 cri.go:89] found id: ""
	I1009 19:12:49.703555  167468 logs.go:282] 0 containers: []
	W1009 19:12:49.703562  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:49.703567  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:49.703617  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:49.730477  167468 cri.go:89] found id: ""
	I1009 19:12:49.730492  167468 logs.go:282] 0 containers: []
	W1009 19:12:49.730498  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:49.730508  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:49.730525  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:49.760658  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:49.760676  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:49.829075  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:49.829099  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:49.841535  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:49.841555  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:49.901305  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:49.892835    7997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:49.893403    7997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:49.895008    7997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:49.895583    7997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:49.896553    7997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:49.892835    7997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:49.893403    7997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:49.895008    7997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:49.895583    7997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:49.896553    7997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:49.901316  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:49.901327  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:52.467860  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:52.478990  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:52.479046  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:52.507725  167468 cri.go:89] found id: ""
	I1009 19:12:52.507745  167468 logs.go:282] 0 containers: []
	W1009 19:12:52.507753  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:52.507759  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:52.507817  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:52.535190  167468 cri.go:89] found id: ""
	I1009 19:12:52.535210  167468 logs.go:282] 0 containers: []
	W1009 19:12:52.535219  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:52.535226  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:52.535277  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:52.562492  167468 cri.go:89] found id: ""
	I1009 19:12:52.562508  167468 logs.go:282] 0 containers: []
	W1009 19:12:52.562515  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:52.562520  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:52.562570  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:52.590535  167468 cri.go:89] found id: ""
	I1009 19:12:52.590556  167468 logs.go:282] 0 containers: []
	W1009 19:12:52.590563  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:52.590568  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:52.590619  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:52.617794  167468 cri.go:89] found id: ""
	I1009 19:12:52.617811  167468 logs.go:282] 0 containers: []
	W1009 19:12:52.617817  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:52.617822  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:52.617871  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:52.645640  167468 cri.go:89] found id: ""
	I1009 19:12:52.645657  167468 logs.go:282] 0 containers: []
	W1009 19:12:52.645663  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:52.645668  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:52.645725  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:52.673077  167468 cri.go:89] found id: ""
	I1009 19:12:52.673099  167468 logs.go:282] 0 containers: []
	W1009 19:12:52.673109  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:52.673121  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:52.673134  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:52.685322  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:52.685343  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:52.744140  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:52.736205    8111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:52.736792    8111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:52.738405    8111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:52.738829    8111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:52.740529    8111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:52.736205    8111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:52.736792    8111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:52.738405    8111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:52.738829    8111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:52.740529    8111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:52.744151  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:52.744161  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:52.804313  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:52.804337  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:52.835400  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:52.835423  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:55.406701  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:55.418704  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:55.418764  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:55.446462  167468 cri.go:89] found id: ""
	I1009 19:12:55.446482  167468 logs.go:282] 0 containers: []
	W1009 19:12:55.446500  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:55.446507  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:55.446565  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:55.474996  167468 cri.go:89] found id: ""
	I1009 19:12:55.475012  167468 logs.go:282] 0 containers: []
	W1009 19:12:55.475021  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:55.475026  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:55.475071  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:55.501499  167468 cri.go:89] found id: ""
	I1009 19:12:55.501517  167468 logs.go:282] 0 containers: []
	W1009 19:12:55.501538  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:55.501548  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:55.501615  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:55.529250  167468 cri.go:89] found id: ""
	I1009 19:12:55.529266  167468 logs.go:282] 0 containers: []
	W1009 19:12:55.529273  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:55.529278  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:55.529331  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:55.557673  167468 cri.go:89] found id: ""
	I1009 19:12:55.557697  167468 logs.go:282] 0 containers: []
	W1009 19:12:55.557705  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:55.557711  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:55.557782  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:55.584821  167468 cri.go:89] found id: ""
	I1009 19:12:55.584837  167468 logs.go:282] 0 containers: []
	W1009 19:12:55.584844  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:55.584848  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:55.584896  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:55.610337  167468 cri.go:89] found id: ""
	I1009 19:12:55.610353  167468 logs.go:282] 0 containers: []
	W1009 19:12:55.610359  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:55.610367  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:55.610394  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:55.640837  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:55.640856  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:55.707303  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:55.707327  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:12:55.719504  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:55.719524  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:55.777237  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:55.769773    8246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:55.770229    8246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:55.771763    8246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:55.772256    8246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:55.773793    8246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:55.769773    8246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:55.770229    8246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:55.771763    8246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:55.772256    8246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:55.773793    8246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:55.777249  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:55.777260  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
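With no control-plane containers to inspect, each iteration ends by pulling the same four log sources shown above. Run by hand they are (verbatim from the log; only the comments are added here):

    # Last 400 lines of the kubelet and CRI-O unit logs
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    # Kernel messages at warning level or above
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # Container status via crictl, falling back to docker if crictl is missing
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a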
	I1009 19:12:58.340087  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:58.351165  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:12:58.351219  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:12:58.378091  167468 cri.go:89] found id: ""
	I1009 19:12:58.378108  167468 logs.go:282] 0 containers: []
	W1009 19:12:58.378114  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:12:58.378119  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:12:58.378169  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:12:58.407571  167468 cri.go:89] found id: ""
	I1009 19:12:58.407589  167468 logs.go:282] 0 containers: []
	W1009 19:12:58.407598  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:12:58.407604  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:12:58.407653  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:12:58.436553  167468 cri.go:89] found id: ""
	I1009 19:12:58.436571  167468 logs.go:282] 0 containers: []
	W1009 19:12:58.436580  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:12:58.436586  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:12:58.436649  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:12:58.463773  167468 cri.go:89] found id: ""
	I1009 19:12:58.463789  167468 logs.go:282] 0 containers: []
	W1009 19:12:58.463795  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:12:58.463799  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:12:58.463859  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:12:58.490461  167468 cri.go:89] found id: ""
	I1009 19:12:58.490477  167468 logs.go:282] 0 containers: []
	W1009 19:12:58.490484  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:12:58.490488  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:12:58.490536  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:12:58.517574  167468 cri.go:89] found id: ""
	I1009 19:12:58.517591  167468 logs.go:282] 0 containers: []
	W1009 19:12:58.517598  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:12:58.517604  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:12:58.517653  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:12:58.544333  167468 cri.go:89] found id: ""
	I1009 19:12:58.544351  167468 logs.go:282] 0 containers: []
	W1009 19:12:58.544361  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:12:58.544371  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:12:58.544398  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:12:58.602923  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:12:58.594853    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:58.595424    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:58.596985    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:58.597443    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:58.599067    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:12:58.594853    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:58.595424    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:58.596985    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:58.597443    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:12:58.599067    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:12:58.602934  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:12:58.602949  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:12:58.666550  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:12:58.666572  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:12:58.696671  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:12:58.696690  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:12:58.763866  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:12:58.763888  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:01.277960  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:01.288975  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:01.289031  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:01.315640  167468 cri.go:89] found id: ""
	I1009 19:13:01.315656  167468 logs.go:282] 0 containers: []
	W1009 19:13:01.315694  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:01.315702  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:01.315763  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:01.344136  167468 cri.go:89] found id: ""
	I1009 19:13:01.344152  167468 logs.go:282] 0 containers: []
	W1009 19:13:01.344159  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:01.344164  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:01.344217  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:01.372892  167468 cri.go:89] found id: ""
	I1009 19:13:01.372907  167468 logs.go:282] 0 containers: []
	W1009 19:13:01.372914  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:01.372919  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:01.372973  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:01.399606  167468 cri.go:89] found id: ""
	I1009 19:13:01.399626  167468 logs.go:282] 0 containers: []
	W1009 19:13:01.399636  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:01.399643  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:01.399697  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:01.427550  167468 cri.go:89] found id: ""
	I1009 19:13:01.427570  167468 logs.go:282] 0 containers: []
	W1009 19:13:01.427581  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:01.427592  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:01.427647  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:01.454668  167468 cri.go:89] found id: ""
	I1009 19:13:01.454686  167468 logs.go:282] 0 containers: []
	W1009 19:13:01.454693  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:01.454698  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:01.454750  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:01.481897  167468 cri.go:89] found id: ""
	I1009 19:13:01.481916  167468 logs.go:282] 0 containers: []
	W1009 19:13:01.481926  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:01.481939  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:01.481955  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:01.555443  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:01.555466  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:01.567729  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:01.567749  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:01.627530  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:01.618960    8477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:01.620263    8477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:01.620839    8477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:01.622496    8477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:01.623021    8477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:01.618960    8477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:01.620263    8477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:01.620839    8477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:01.622496    8477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:01.623021    8477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:01.627544  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:01.627559  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:01.688247  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:01.688274  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:04.220134  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:04.231353  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:04.231446  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:04.258512  167468 cri.go:89] found id: ""
	I1009 19:13:04.258528  167468 logs.go:282] 0 containers: []
	W1009 19:13:04.258534  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:04.258539  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:04.258586  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:04.285536  167468 cri.go:89] found id: ""
	I1009 19:13:04.285552  167468 logs.go:282] 0 containers: []
	W1009 19:13:04.285558  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:04.285564  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:04.285612  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:04.314877  167468 cri.go:89] found id: ""
	I1009 19:13:04.314902  167468 logs.go:282] 0 containers: []
	W1009 19:13:04.314909  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:04.314914  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:04.314968  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:04.342074  167468 cri.go:89] found id: ""
	I1009 19:13:04.342091  167468 logs.go:282] 0 containers: []
	W1009 19:13:04.342101  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:04.342108  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:04.342168  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:04.367935  167468 cri.go:89] found id: ""
	I1009 19:13:04.367951  167468 logs.go:282] 0 containers: []
	W1009 19:13:04.367959  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:04.367964  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:04.368012  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:04.394817  167468 cri.go:89] found id: ""
	I1009 19:13:04.394837  167468 logs.go:282] 0 containers: []
	W1009 19:13:04.394846  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:04.394854  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:04.394919  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:04.421650  167468 cri.go:89] found id: ""
	I1009 19:13:04.421670  167468 logs.go:282] 0 containers: []
	W1009 19:13:04.421680  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:04.421691  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:04.421712  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:04.490071  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:04.490097  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:04.502160  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:04.502179  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:04.561004  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:04.553527    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:04.554086    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:04.555768    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:04.556209    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:04.557463    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:04.553527    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:04.554086    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:04.555768    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:04.556209    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:04.557463    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:04.561015  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:04.561026  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:04.627255  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:04.627292  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:07.159560  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:07.170893  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:07.170944  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:07.198061  167468 cri.go:89] found id: ""
	I1009 19:13:07.198081  167468 logs.go:282] 0 containers: []
	W1009 19:13:07.198088  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:07.198094  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:07.198144  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:07.226131  167468 cri.go:89] found id: ""
	I1009 19:13:07.226150  167468 logs.go:282] 0 containers: []
	W1009 19:13:07.226157  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:07.226162  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:07.226220  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:07.254150  167468 cri.go:89] found id: ""
	I1009 19:13:07.254171  167468 logs.go:282] 0 containers: []
	W1009 19:13:07.254181  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:07.254188  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:07.254244  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:07.281984  167468 cri.go:89] found id: ""
	I1009 19:13:07.282004  167468 logs.go:282] 0 containers: []
	W1009 19:13:07.282015  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:07.282023  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:07.282087  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:07.309721  167468 cri.go:89] found id: ""
	I1009 19:13:07.309741  167468 logs.go:282] 0 containers: []
	W1009 19:13:07.309747  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:07.309752  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:07.309807  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:07.336611  167468 cri.go:89] found id: ""
	I1009 19:13:07.336629  167468 logs.go:282] 0 containers: []
	W1009 19:13:07.336636  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:07.336641  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:07.336698  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:07.363039  167468 cri.go:89] found id: ""
	I1009 19:13:07.363059  167468 logs.go:282] 0 containers: []
	W1009 19:13:07.363065  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:07.363074  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:07.363084  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:07.433229  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:07.433254  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:07.445762  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:07.445782  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:07.506602  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:07.497036    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:07.497750    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:07.499446    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:07.501191    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:07.501817    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:07.497036    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:07.497750    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:07.499446    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:07.501191    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:07.501817    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:07.506621  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:07.506637  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:07.570528  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:07.570555  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:10.103498  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:10.114559  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:10.114618  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:10.140877  167468 cri.go:89] found id: ""
	I1009 19:13:10.140904  167468 logs.go:282] 0 containers: []
	W1009 19:13:10.140915  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:10.140921  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:10.140976  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:10.167893  167468 cri.go:89] found id: ""
	I1009 19:13:10.167928  167468 logs.go:282] 0 containers: []
	W1009 19:13:10.167938  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:10.167945  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:10.168001  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:10.195691  167468 cri.go:89] found id: ""
	I1009 19:13:10.195708  167468 logs.go:282] 0 containers: []
	W1009 19:13:10.195737  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:10.195744  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:10.195806  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:10.222647  167468 cri.go:89] found id: ""
	I1009 19:13:10.222665  167468 logs.go:282] 0 containers: []
	W1009 19:13:10.222671  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:10.222677  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:10.222729  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:10.249706  167468 cri.go:89] found id: ""
	I1009 19:13:10.249725  167468 logs.go:282] 0 containers: []
	W1009 19:13:10.249735  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:10.249741  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:10.249805  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:10.277282  167468 cri.go:89] found id: ""
	I1009 19:13:10.277302  167468 logs.go:282] 0 containers: []
	W1009 19:13:10.277311  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:10.277317  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:10.277395  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:10.305128  167468 cri.go:89] found id: ""
	I1009 19:13:10.305144  167468 logs.go:282] 0 containers: []
	W1009 19:13:10.305151  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:10.305159  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:10.305171  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:10.366874  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:10.359143    8834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:10.359783    8834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:10.361001    8834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:10.361659    8834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:10.363247    8834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:10.359143    8834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:10.359783    8834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:10.361001    8834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:10.361659    8834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:10.363247    8834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:10.366887  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:10.366899  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:10.431608  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:10.431633  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:10.463358  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:10.463402  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:10.531897  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:10.531921  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:13.047007  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:13.058221  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:13.058285  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:13.086231  167468 cri.go:89] found id: ""
	I1009 19:13:13.086259  167468 logs.go:282] 0 containers: []
	W1009 19:13:13.086266  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:13.086272  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:13.086326  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:13.111982  167468 cri.go:89] found id: ""
	I1009 19:13:13.111999  167468 logs.go:282] 0 containers: []
	W1009 19:13:13.112006  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:13.112011  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:13.112068  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:13.138979  167468 cri.go:89] found id: ""
	I1009 19:13:13.139004  167468 logs.go:282] 0 containers: []
	W1009 19:13:13.139011  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:13.139016  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:13.139067  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:13.167881  167468 cri.go:89] found id: ""
	I1009 19:13:13.167902  167468 logs.go:282] 0 containers: []
	W1009 19:13:13.167913  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:13.167920  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:13.167974  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:13.197025  167468 cri.go:89] found id: ""
	I1009 19:13:13.197040  167468 logs.go:282] 0 containers: []
	W1009 19:13:13.197047  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:13.197052  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:13.197110  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:13.224797  167468 cri.go:89] found id: ""
	I1009 19:13:13.224813  167468 logs.go:282] 0 containers: []
	W1009 19:13:13.224819  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:13.224824  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:13.224868  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:13.251310  167468 cri.go:89] found id: ""
	I1009 19:13:13.251329  167468 logs.go:282] 0 containers: []
	W1009 19:13:13.251339  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:13.251351  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:13.251370  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:13.263868  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:13.263890  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:13.322120  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:13.314752    8962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:13.315273    8962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:13.316869    8962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:13.317321    8962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:13.318642    8962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:13.314752    8962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:13.315273    8962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:13.316869    8962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:13.317321    8962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:13.318642    8962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:13.322130  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:13.322141  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:13.386957  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:13.386982  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:13.419121  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:13.419142  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:15.986307  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:15.997455  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:15.997514  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:16.023786  167468 cri.go:89] found id: ""
	I1009 19:13:16.023803  167468 logs.go:282] 0 containers: []
	W1009 19:13:16.023810  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:16.023815  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:16.023862  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:16.051180  167468 cri.go:89] found id: ""
	I1009 19:13:16.051201  167468 logs.go:282] 0 containers: []
	W1009 19:13:16.051211  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:16.051218  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:16.051269  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:16.078469  167468 cri.go:89] found id: ""
	I1009 19:13:16.078489  167468 logs.go:282] 0 containers: []
	W1009 19:13:16.078501  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:16.078507  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:16.078570  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:16.106922  167468 cri.go:89] found id: ""
	I1009 19:13:16.106942  167468 logs.go:282] 0 containers: []
	W1009 19:13:16.106949  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:16.106953  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:16.107015  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:16.134957  167468 cri.go:89] found id: ""
	I1009 19:13:16.134974  167468 logs.go:282] 0 containers: []
	W1009 19:13:16.134985  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:16.134990  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:16.135038  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:16.162970  167468 cri.go:89] found id: ""
	I1009 19:13:16.162986  167468 logs.go:282] 0 containers: []
	W1009 19:13:16.162992  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:16.162997  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:16.163062  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:16.190741  167468 cri.go:89] found id: ""
	I1009 19:13:16.190759  167468 logs.go:282] 0 containers: []
	W1009 19:13:16.190773  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:16.190782  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:16.190793  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:16.256749  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:16.256775  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:16.268841  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:16.268862  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:16.328040  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:16.319195    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:16.319979    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:16.321948    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:16.322864    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:16.323494    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:16.319195    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:16.319979    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:16.321948    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:16.322864    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:16.323494    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:16.328057  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:16.328070  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:16.391596  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:16.391621  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:18.923965  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:18.935342  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:18.935407  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:18.963928  167468 cri.go:89] found id: ""
	I1009 19:13:18.963948  167468 logs.go:282] 0 containers: []
	W1009 19:13:18.963954  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:18.963959  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:18.964008  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:18.991109  167468 cri.go:89] found id: ""
	I1009 19:13:18.991125  167468 logs.go:282] 0 containers: []
	W1009 19:13:18.991131  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:18.991136  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:18.991183  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:19.018365  167468 cri.go:89] found id: ""
	I1009 19:13:19.018402  167468 logs.go:282] 0 containers: []
	W1009 19:13:19.018412  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:19.018418  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:19.018469  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:19.045613  167468 cri.go:89] found id: ""
	I1009 19:13:19.045629  167468 logs.go:282] 0 containers: []
	W1009 19:13:19.045638  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:19.045645  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:19.045705  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:19.073406  167468 cri.go:89] found id: ""
	I1009 19:13:19.073425  167468 logs.go:282] 0 containers: []
	W1009 19:13:19.073432  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:19.073437  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:19.073492  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:19.100393  167468 cri.go:89] found id: ""
	I1009 19:13:19.100412  167468 logs.go:282] 0 containers: []
	W1009 19:13:19.100418  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:19.100423  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:19.100471  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:19.126851  167468 cri.go:89] found id: ""
	I1009 19:13:19.126867  167468 logs.go:282] 0 containers: []
	W1009 19:13:19.126873  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:19.126880  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:19.126892  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:19.187263  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:19.179205    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:19.180148    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:19.181817    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:19.182282    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:19.183463    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:19.179205    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:19.180148    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:19.181817    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:19.182282    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:19.183463    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:19.187275  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:19.187287  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:19.249235  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:19.249260  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:19.280761  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:19.280782  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:19.348861  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:19.348882  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:21.863867  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:21.875320  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:21.875402  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:21.901142  167468 cri.go:89] found id: ""
	I1009 19:13:21.901162  167468 logs.go:282] 0 containers: []
	W1009 19:13:21.901172  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:21.901179  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:21.901245  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:21.928133  167468 cri.go:89] found id: ""
	I1009 19:13:21.928152  167468 logs.go:282] 0 containers: []
	W1009 19:13:21.928158  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:21.928164  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:21.928212  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:21.955553  167468 cri.go:89] found id: ""
	I1009 19:13:21.955569  167468 logs.go:282] 0 containers: []
	W1009 19:13:21.955576  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:21.955581  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:21.955629  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:21.983034  167468 cri.go:89] found id: ""
	I1009 19:13:21.983051  167468 logs.go:282] 0 containers: []
	W1009 19:13:21.983059  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:21.983066  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:21.983121  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:22.010710  167468 cri.go:89] found id: ""
	I1009 19:13:22.010728  167468 logs.go:282] 0 containers: []
	W1009 19:13:22.010736  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:22.010741  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:22.010806  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:22.036790  167468 cri.go:89] found id: ""
	I1009 19:13:22.036806  167468 logs.go:282] 0 containers: []
	W1009 19:13:22.036813  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:22.036818  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:22.036863  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:22.063811  167468 cri.go:89] found id: ""
	I1009 19:13:22.063829  167468 logs.go:282] 0 containers: []
	W1009 19:13:22.063835  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:22.063844  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:22.063853  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:22.130862  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:22.130888  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:22.143167  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:22.143188  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:22.204009  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:22.195809    9341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:22.196397    9341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:22.198003    9341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:22.198478    9341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:22.200063    9341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:22.195809    9341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:22.196397    9341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:22.198003    9341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:22.198478    9341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:22.200063    9341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:22.204024  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:22.204036  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:22.268771  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:22.268794  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:24.801350  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:24.812363  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:24.812431  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:24.840646  167468 cri.go:89] found id: ""
	I1009 19:13:24.840663  167468 logs.go:282] 0 containers: []
	W1009 19:13:24.840671  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:24.840677  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:24.840739  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:24.867359  167468 cri.go:89] found id: ""
	I1009 19:13:24.867392  167468 logs.go:282] 0 containers: []
	W1009 19:13:24.867402  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:24.867409  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:24.867470  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:24.894684  167468 cri.go:89] found id: ""
	I1009 19:13:24.894701  167468 logs.go:282] 0 containers: []
	W1009 19:13:24.894707  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:24.894712  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:24.894761  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:24.922658  167468 cri.go:89] found id: ""
	I1009 19:13:24.922678  167468 logs.go:282] 0 containers: []
	W1009 19:13:24.922688  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:24.922694  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:24.922751  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:24.949879  167468 cri.go:89] found id: ""
	I1009 19:13:24.949895  167468 logs.go:282] 0 containers: []
	W1009 19:13:24.949901  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:24.949906  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:24.949964  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:24.976423  167468 cri.go:89] found id: ""
	I1009 19:13:24.976441  167468 logs.go:282] 0 containers: []
	W1009 19:13:24.976450  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:24.976457  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:24.976512  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:25.002011  167468 cri.go:89] found id: ""
	I1009 19:13:25.002028  167468 logs.go:282] 0 containers: []
	W1009 19:13:25.002034  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:25.002042  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:25.002054  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:25.073024  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:25.073048  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:25.085208  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:25.085228  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:25.144068  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:25.136709    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:25.137237    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:25.138809    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:25.139304    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:25.140539    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:25.136709    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:25.137237    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:25.138809    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:25.139304    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:25.140539    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:25.144082  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:25.144098  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:25.208021  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:25.208044  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:27.740581  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:27.751702  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:27.751756  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:27.778066  167468 cri.go:89] found id: ""
	I1009 19:13:27.778082  167468 logs.go:282] 0 containers: []
	W1009 19:13:27.778088  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:27.778093  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:27.778139  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:27.806166  167468 cri.go:89] found id: ""
	I1009 19:13:27.806183  167468 logs.go:282] 0 containers: []
	W1009 19:13:27.806192  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:27.806198  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:27.806261  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:27.833747  167468 cri.go:89] found id: ""
	I1009 19:13:27.833783  167468 logs.go:282] 0 containers: []
	W1009 19:13:27.833793  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:27.833800  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:27.833859  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:27.861452  167468 cri.go:89] found id: ""
	I1009 19:13:27.861471  167468 logs.go:282] 0 containers: []
	W1009 19:13:27.861478  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:27.861482  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:27.861543  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:27.889001  167468 cri.go:89] found id: ""
	I1009 19:13:27.889017  167468 logs.go:282] 0 containers: []
	W1009 19:13:27.889023  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:27.889030  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:27.889090  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:27.915709  167468 cri.go:89] found id: ""
	I1009 19:13:27.915729  167468 logs.go:282] 0 containers: []
	W1009 19:13:27.915739  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:27.915746  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:27.915802  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:27.943121  167468 cri.go:89] found id: ""
	I1009 19:13:27.943140  167468 logs.go:282] 0 containers: []
	W1009 19:13:27.943146  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:27.943156  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:27.943167  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:28.010452  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:28.010475  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:28.022860  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:28.022878  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:28.080632  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:28.072836    9582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:28.073401    9582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:28.074954    9582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:28.075364    9582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:28.076931    9582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:28.072836    9582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:28.073401    9582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:28.074954    9582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:28.075364    9582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:28.076931    9582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:28.080645  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:28.080658  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:28.144679  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:28.144702  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:30.676105  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:30.687597  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:30.687649  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:30.714683  167468 cri.go:89] found id: ""
	I1009 19:13:30.714700  167468 logs.go:282] 0 containers: []
	W1009 19:13:30.714707  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:30.714712  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:30.714776  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:30.742271  167468 cri.go:89] found id: ""
	I1009 19:13:30.742292  167468 logs.go:282] 0 containers: []
	W1009 19:13:30.742301  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:30.742308  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:30.742397  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:30.769357  167468 cri.go:89] found id: ""
	I1009 19:13:30.769388  167468 logs.go:282] 0 containers: []
	W1009 19:13:30.769397  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:30.769404  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:30.769463  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:30.795938  167468 cri.go:89] found id: ""
	I1009 19:13:30.795955  167468 logs.go:282] 0 containers: []
	W1009 19:13:30.795962  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:30.795968  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:30.796029  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:30.821704  167468 cri.go:89] found id: ""
	I1009 19:13:30.821726  167468 logs.go:282] 0 containers: []
	W1009 19:13:30.821736  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:30.821743  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:30.821813  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:30.848828  167468 cri.go:89] found id: ""
	I1009 19:13:30.848847  167468 logs.go:282] 0 containers: []
	W1009 19:13:30.848853  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:30.848859  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:30.848906  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:30.876298  167468 cri.go:89] found id: ""
	I1009 19:13:30.876318  167468 logs.go:282] 0 containers: []
	W1009 19:13:30.876328  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:30.876338  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:30.876357  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:30.947427  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:30.947451  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:30.959445  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:30.959462  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:31.017292  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:31.009627    9718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:31.010482    9718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:31.011538    9718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:31.012034    9718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:31.013579    9718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:31.009627    9718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:31.010482    9718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:31.011538    9718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:31.012034    9718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:31.013579    9718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:31.017303  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:31.017318  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:31.080462  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:31.080485  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:33.612293  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:33.623432  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:33.623482  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:33.650758  167468 cri.go:89] found id: ""
	I1009 19:13:33.650776  167468 logs.go:282] 0 containers: []
	W1009 19:13:33.650783  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:33.650789  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:33.650844  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:33.678965  167468 cri.go:89] found id: ""
	I1009 19:13:33.678981  167468 logs.go:282] 0 containers: []
	W1009 19:13:33.678988  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:33.678992  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:33.679068  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:33.709733  167468 cri.go:89] found id: ""
	I1009 19:13:33.709754  167468 logs.go:282] 0 containers: []
	W1009 19:13:33.709762  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:33.709769  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:33.709899  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:33.740843  167468 cri.go:89] found id: ""
	I1009 19:13:33.740860  167468 logs.go:282] 0 containers: []
	W1009 19:13:33.740867  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:33.740872  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:33.740923  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:33.768607  167468 cri.go:89] found id: ""
	I1009 19:13:33.768624  167468 logs.go:282] 0 containers: []
	W1009 19:13:33.768631  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:33.768636  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:33.768685  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:33.795766  167468 cri.go:89] found id: ""
	I1009 19:13:33.795783  167468 logs.go:282] 0 containers: []
	W1009 19:13:33.795790  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:33.795796  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:33.795851  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:33.824447  167468 cri.go:89] found id: ""
	I1009 19:13:33.824468  167468 logs.go:282] 0 containers: []
	W1009 19:13:33.824477  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:33.824489  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:33.824505  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:33.886369  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:33.878113    9834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:33.878720    9834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:33.880311    9834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:33.880950    9834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:33.882576    9834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:33.878113    9834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:33.878720    9834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:33.880311    9834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:33.880950    9834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:33.882576    9834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:33.886403  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:33.886419  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:33.948841  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:33.948874  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:33.980307  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:33.980330  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:34.048912  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:34.048944  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
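	The same diagnostic cycle repeats every few seconds for as long as the apiserver on localhost:8441 stays unreachable. A condensed sketch of one pass, assembled only from the commands that appear verbatim in the log above (the shell loop is illustrative shorthand, not how minikube actually invokes them):
	
	  sudo pgrep -xnf kube-apiserver.*minikube.*
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    sudo crictl ps -a --quiet --name=$name              # every probe returns no containers
	  done
	  sudo journalctl -u kubelet -n 400                     # kubelet logs
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	                                                        # fails: dial tcp [::1]:8441: connect: connection refused
	  sudo journalctl -u crio -n 400                        # CRI-O logs
	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	
	Each pass finds no control-plane containers and cannot reach the apiserver, so the wait loop keeps retrying, producing the repeated blocks that follow.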
	I1009 19:13:36.564162  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:36.576125  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:36.576178  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:36.604219  167468 cri.go:89] found id: ""
	I1009 19:13:36.604235  167468 logs.go:282] 0 containers: []
	W1009 19:13:36.604242  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:36.604246  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:36.604297  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:36.631435  167468 cri.go:89] found id: ""
	I1009 19:13:36.631455  167468 logs.go:282] 0 containers: []
	W1009 19:13:36.631463  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:36.631468  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:36.631522  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:36.658905  167468 cri.go:89] found id: ""
	I1009 19:13:36.658925  167468 logs.go:282] 0 containers: []
	W1009 19:13:36.658932  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:36.658941  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:36.659003  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:36.687919  167468 cri.go:89] found id: ""
	I1009 19:13:36.687941  167468 logs.go:282] 0 containers: []
	W1009 19:13:36.687948  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:36.687963  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:36.688010  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:36.715354  167468 cri.go:89] found id: ""
	I1009 19:13:36.715372  167468 logs.go:282] 0 containers: []
	W1009 19:13:36.715398  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:36.715405  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:36.715466  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:36.743207  167468 cri.go:89] found id: ""
	I1009 19:13:36.743224  167468 logs.go:282] 0 containers: []
	W1009 19:13:36.743238  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:36.743243  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:36.743291  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:36.770612  167468 cri.go:89] found id: ""
	I1009 19:13:36.770629  167468 logs.go:282] 0 containers: []
	W1009 19:13:36.770636  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:36.770645  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:36.770656  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:36.836830  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:36.836856  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:36.849433  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:36.849452  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:36.908266  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:36.900497    9960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:36.901238    9960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:36.902808    9960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:36.903266    9960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:36.904594    9960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:36.900497    9960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:36.901238    9960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:36.902808    9960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:36.903266    9960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:36.904594    9960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:36.908283  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:36.908297  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:36.975244  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:36.975275  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:39.505862  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:39.516820  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:39.516888  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:39.543164  167468 cri.go:89] found id: ""
	I1009 19:13:39.543180  167468 logs.go:282] 0 containers: []
	W1009 19:13:39.543186  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:39.543191  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:39.543240  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:39.569192  167468 cri.go:89] found id: ""
	I1009 19:13:39.569212  167468 logs.go:282] 0 containers: []
	W1009 19:13:39.569221  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:39.569227  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:39.569287  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:39.596196  167468 cri.go:89] found id: ""
	I1009 19:13:39.596213  167468 logs.go:282] 0 containers: []
	W1009 19:13:39.596219  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:39.596224  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:39.596271  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:39.622067  167468 cri.go:89] found id: ""
	I1009 19:13:39.622087  167468 logs.go:282] 0 containers: []
	W1009 19:13:39.622093  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:39.622098  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:39.622152  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:39.649128  167468 cri.go:89] found id: ""
	I1009 19:13:39.649145  167468 logs.go:282] 0 containers: []
	W1009 19:13:39.649151  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:39.649156  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:39.649202  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:39.674991  167468 cri.go:89] found id: ""
	I1009 19:13:39.675010  167468 logs.go:282] 0 containers: []
	W1009 19:13:39.675020  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:39.675027  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:39.675129  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:39.702254  167468 cri.go:89] found id: ""
	I1009 19:13:39.702274  167468 logs.go:282] 0 containers: []
	W1009 19:13:39.702284  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:39.702295  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:39.702307  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:39.774369  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:39.774400  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:39.786946  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:39.786967  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:39.846655  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:39.839086   10086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:39.839592   10086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:39.841208   10086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:39.841703   10086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:39.843295   10086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:39.839086   10086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:39.839592   10086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:39.841208   10086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:39.841703   10086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:39.843295   10086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:39.846669  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:39.846682  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:39.910311  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:39.910334  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:42.443183  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:42.454133  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:42.454185  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:42.481698  167468 cri.go:89] found id: ""
	I1009 19:13:42.481718  167468 logs.go:282] 0 containers: []
	W1009 19:13:42.481727  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:42.481733  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:42.481786  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:42.508494  167468 cri.go:89] found id: ""
	I1009 19:13:42.508514  167468 logs.go:282] 0 containers: []
	W1009 19:13:42.508524  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:42.508531  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:42.508585  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:42.535987  167468 cri.go:89] found id: ""
	I1009 19:13:42.536004  167468 logs.go:282] 0 containers: []
	W1009 19:13:42.536025  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:42.536034  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:42.536096  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:42.563210  167468 cri.go:89] found id: ""
	I1009 19:13:42.563227  167468 logs.go:282] 0 containers: []
	W1009 19:13:42.563234  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:42.563239  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:42.563285  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:42.590575  167468 cri.go:89] found id: ""
	I1009 19:13:42.590592  167468 logs.go:282] 0 containers: []
	W1009 19:13:42.590598  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:42.590603  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:42.590649  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:42.616425  167468 cri.go:89] found id: ""
	I1009 19:13:42.616440  167468 logs.go:282] 0 containers: []
	W1009 19:13:42.616446  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:42.616451  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:42.616494  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:42.644221  167468 cri.go:89] found id: ""
	I1009 19:13:42.644239  167468 logs.go:282] 0 containers: []
	W1009 19:13:42.644248  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:42.644259  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:42.644272  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:42.712601  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:42.712623  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:42.724833  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:42.724851  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:42.782650  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:42.775609   10205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:42.776076   10205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:42.777677   10205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:42.778114   10205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:42.779450   10205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:42.775609   10205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:42.776076   10205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:42.777677   10205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:42.778114   10205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:42.779450   10205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:42.782664  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:42.782682  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:42.846741  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:42.846763  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:45.378614  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:45.389636  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:45.389712  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:45.415855  167468 cri.go:89] found id: ""
	I1009 19:13:45.415873  167468 logs.go:282] 0 containers: []
	W1009 19:13:45.415880  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:45.415886  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:45.415934  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:45.444082  167468 cri.go:89] found id: ""
	I1009 19:13:45.444099  167468 logs.go:282] 0 containers: []
	W1009 19:13:45.444106  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:45.444111  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:45.444159  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:45.470687  167468 cri.go:89] found id: ""
	I1009 19:13:45.470707  167468 logs.go:282] 0 containers: []
	W1009 19:13:45.470718  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:45.470725  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:45.470780  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:45.499546  167468 cri.go:89] found id: ""
	I1009 19:13:45.499563  167468 logs.go:282] 0 containers: []
	W1009 19:13:45.499569  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:45.499580  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:45.499627  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:45.527809  167468 cri.go:89] found id: ""
	I1009 19:13:45.527828  167468 logs.go:282] 0 containers: []
	W1009 19:13:45.527837  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:45.527843  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:45.527895  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:45.555994  167468 cri.go:89] found id: ""
	I1009 19:13:45.556012  167468 logs.go:282] 0 containers: []
	W1009 19:13:45.556022  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:45.556030  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:45.556162  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:45.583148  167468 cri.go:89] found id: ""
	I1009 19:13:45.583165  167468 logs.go:282] 0 containers: []
	W1009 19:13:45.583171  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:45.583180  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:45.583191  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:45.653733  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:45.653757  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:45.665821  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:45.665842  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:45.723605  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:45.715791   10333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:45.716399   10333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:45.718036   10333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:45.718509   10333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:45.719963   10333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:45.715791   10333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:45.716399   10333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:45.718036   10333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:45.718509   10333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:45.719963   10333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:45.723618  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:45.723632  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:45.785630  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:45.785651  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:48.317201  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:48.328498  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:48.328563  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:48.356507  167468 cri.go:89] found id: ""
	I1009 19:13:48.356526  167468 logs.go:282] 0 containers: []
	W1009 19:13:48.356534  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:48.356542  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:48.356604  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:48.385398  167468 cri.go:89] found id: ""
	I1009 19:13:48.385416  167468 logs.go:282] 0 containers: []
	W1009 19:13:48.385422  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:48.385427  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:48.385477  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:48.412259  167468 cri.go:89] found id: ""
	I1009 19:13:48.412276  167468 logs.go:282] 0 containers: []
	W1009 19:13:48.412284  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:48.412289  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:48.412339  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:48.440453  167468 cri.go:89] found id: ""
	I1009 19:13:48.440471  167468 logs.go:282] 0 containers: []
	W1009 19:13:48.440479  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:48.440486  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:48.440549  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:48.469351  167468 cri.go:89] found id: ""
	I1009 19:13:48.469367  167468 logs.go:282] 0 containers: []
	W1009 19:13:48.469374  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:48.469396  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:48.469457  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:48.498335  167468 cri.go:89] found id: ""
	I1009 19:13:48.498349  167468 logs.go:282] 0 containers: []
	W1009 19:13:48.498355  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:48.498360  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:48.498424  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:48.525258  167468 cri.go:89] found id: ""
	I1009 19:13:48.525275  167468 logs.go:282] 0 containers: []
	W1009 19:13:48.525282  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:48.525292  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:48.525307  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:48.590425  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:48.590448  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:48.602233  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:48.602252  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:48.660259  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:48.653067   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:48.653655   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:48.655299   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:48.655831   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:48.656956   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:48.653067   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:48.653655   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:48.655299   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:48.655831   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:48.656956   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:48.660269  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:48.660281  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:48.724597  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:48.724621  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
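	--
	Each polling cycle above repeats the same set of node-side checks. Assuming shell access to the node (e.g. via minikube ssh), they can be reproduced by hand with roughly the following sketch; the commands mirror the log lines above and are not minikube's own code:
	
	#!/usr/bin/env bash
	# Sketch: re-run the control-plane diagnostics seen in this log by hand.
	# Component names and paths are taken directly from the log lines above.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  echo "== ${c} =="
	  sudo crictl ps -a --quiet --name="${c}"   # empty output == "No container was found matching"
	done
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # process-level apiserver check
	sudo journalctl -u kubelet -n 400              # kubelet logs
	sudo journalctl -u crio -n 400                 # CRI-O logs
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	--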
	I1009 19:13:51.257337  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:51.269111  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:51.269166  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:51.296195  167468 cri.go:89] found id: ""
	I1009 19:13:51.296210  167468 logs.go:282] 0 containers: []
	W1009 19:13:51.296216  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:51.296221  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:51.296282  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:51.322519  167468 cri.go:89] found id: ""
	I1009 19:13:51.322536  167468 logs.go:282] 0 containers: []
	W1009 19:13:51.322542  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:51.322547  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:51.322594  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:51.349587  167468 cri.go:89] found id: ""
	I1009 19:13:51.349603  167468 logs.go:282] 0 containers: []
	W1009 19:13:51.349609  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:51.349614  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:51.349667  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:51.377783  167468 cri.go:89] found id: ""
	I1009 19:13:51.377801  167468 logs.go:282] 0 containers: []
	W1009 19:13:51.377809  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:51.377814  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:51.377865  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:51.404656  167468 cri.go:89] found id: ""
	I1009 19:13:51.404672  167468 logs.go:282] 0 containers: []
	W1009 19:13:51.404681  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:51.404688  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:51.404747  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:51.430810  167468 cri.go:89] found id: ""
	I1009 19:13:51.430826  167468 logs.go:282] 0 containers: []
	W1009 19:13:51.430832  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:51.430838  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:51.430896  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:51.457166  167468 cri.go:89] found id: ""
	I1009 19:13:51.457189  167468 logs.go:282] 0 containers: []
	W1009 19:13:51.457200  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:51.457211  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:51.457223  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:51.521965  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:51.521988  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:51.534521  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:51.534545  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:51.593719  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:51.585963   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:51.586439   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:51.588046   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:51.588481   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:51.590012   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:51.585963   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:51.586439   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:51.588046   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:51.588481   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:51.590012   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:51.593731  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:51.593740  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:51.654584  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:51.654606  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:54.187112  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:54.198337  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:54.198414  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:54.225550  167468 cri.go:89] found id: ""
	I1009 19:13:54.225570  167468 logs.go:282] 0 containers: []
	W1009 19:13:54.225584  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:54.225591  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:54.225639  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:54.252848  167468 cri.go:89] found id: ""
	I1009 19:13:54.252864  167468 logs.go:282] 0 containers: []
	W1009 19:13:54.252871  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:54.252876  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:54.252936  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:54.279625  167468 cri.go:89] found id: ""
	I1009 19:13:54.279642  167468 logs.go:282] 0 containers: []
	W1009 19:13:54.279648  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:54.279659  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:54.279715  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:54.307491  167468 cri.go:89] found id: ""
	I1009 19:13:54.307507  167468 logs.go:282] 0 containers: []
	W1009 19:13:54.307513  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:54.307518  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:54.307571  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:54.335023  167468 cri.go:89] found id: ""
	I1009 19:13:54.335048  167468 logs.go:282] 0 containers: []
	W1009 19:13:54.335056  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:54.335063  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:54.335121  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:54.362616  167468 cri.go:89] found id: ""
	I1009 19:13:54.362633  167468 logs.go:282] 0 containers: []
	W1009 19:13:54.362640  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:54.362645  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:54.362719  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:54.391155  167468 cri.go:89] found id: ""
	I1009 19:13:54.391175  167468 logs.go:282] 0 containers: []
	W1009 19:13:54.391186  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:54.391197  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:54.391212  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:54.452190  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:54.444274   10694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:54.444870   10694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:54.446625   10694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:54.447165   10694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:54.448804   10694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:54.444274   10694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:54.444870   10694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:54.446625   10694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:54.447165   10694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:54.448804   10694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:54.452204  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:54.452219  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:13:54.514282  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:54.514306  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:54.544238  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:54.544256  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:54.612145  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:54.612173  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:57.125509  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:57.136612  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:13:57.136699  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:13:57.162822  167468 cri.go:89] found id: ""
	I1009 19:13:57.162841  167468 logs.go:282] 0 containers: []
	W1009 19:13:57.162849  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:13:57.162854  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:13:57.162903  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:13:57.190000  167468 cri.go:89] found id: ""
	I1009 19:13:57.190018  167468 logs.go:282] 0 containers: []
	W1009 19:13:57.190025  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:13:57.190030  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:13:57.190077  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:13:57.217780  167468 cri.go:89] found id: ""
	I1009 19:13:57.217801  167468 logs.go:282] 0 containers: []
	W1009 19:13:57.217812  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:13:57.217819  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:13:57.217876  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:13:57.243876  167468 cri.go:89] found id: ""
	I1009 19:13:57.243898  167468 logs.go:282] 0 containers: []
	W1009 19:13:57.243908  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:13:57.243914  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:13:57.243976  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:13:57.270405  167468 cri.go:89] found id: ""
	I1009 19:13:57.270425  167468 logs.go:282] 0 containers: []
	W1009 19:13:57.270432  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:13:57.270437  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:13:57.270486  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:13:57.299825  167468 cri.go:89] found id: ""
	I1009 19:13:57.299841  167468 logs.go:282] 0 containers: []
	W1009 19:13:57.299848  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:13:57.299853  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:13:57.299914  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:13:57.327570  167468 cri.go:89] found id: ""
	I1009 19:13:57.327587  167468 logs.go:282] 0 containers: []
	W1009 19:13:57.327594  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:13:57.327603  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:13:57.327615  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:13:57.359019  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:13:57.359050  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:13:57.428142  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:13:57.428165  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:13:57.440563  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:13:57.440584  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:13:57.500538  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:13:57.492802   10837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:57.493421   10837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:57.495026   10837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:57.495441   10837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:57.497020   10837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:13:57.492802   10837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:57.493421   10837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:57.495026   10837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:57.495441   10837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:13:57.497020   10837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:13:57.500549  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:13:57.500567  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:00.065761  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:00.077245  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:00.077320  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:00.106125  167468 cri.go:89] found id: ""
	I1009 19:14:00.106140  167468 logs.go:282] 0 containers: []
	W1009 19:14:00.106146  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:00.106151  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:00.106202  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:00.134788  167468 cri.go:89] found id: ""
	I1009 19:14:00.134807  167468 logs.go:282] 0 containers: []
	W1009 19:14:00.134818  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:00.134824  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:00.134891  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:00.163060  167468 cri.go:89] found id: ""
	I1009 19:14:00.163076  167468 logs.go:282] 0 containers: []
	W1009 19:14:00.163082  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:00.163087  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:00.163135  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:00.192113  167468 cri.go:89] found id: ""
	I1009 19:14:00.192133  167468 logs.go:282] 0 containers: []
	W1009 19:14:00.192143  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:00.192149  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:00.192210  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:00.218783  167468 cri.go:89] found id: ""
	I1009 19:14:00.218804  167468 logs.go:282] 0 containers: []
	W1009 19:14:00.218811  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:00.218817  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:00.218868  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:00.246603  167468 cri.go:89] found id: ""
	I1009 19:14:00.246620  167468 logs.go:282] 0 containers: []
	W1009 19:14:00.246627  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:00.246632  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:00.246683  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:00.274697  167468 cri.go:89] found id: ""
	I1009 19:14:00.274713  167468 logs.go:282] 0 containers: []
	W1009 19:14:00.274719  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:00.274729  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:00.274739  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:00.287013  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:00.287030  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:00.348225  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:00.340024   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:00.340555   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:00.342294   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:00.342898   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:00.344448   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:00.340024   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:00.340555   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:00.342294   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:00.342898   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:00.344448   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:00.348243  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:00.348255  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:00.414970  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:00.415009  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:00.446010  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:00.446031  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:03.018679  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:03.030482  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:03.030538  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:03.059098  167468 cri.go:89] found id: ""
	I1009 19:14:03.059119  167468 logs.go:282] 0 containers: []
	W1009 19:14:03.059129  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:03.059137  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:03.059195  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:03.086255  167468 cri.go:89] found id: ""
	I1009 19:14:03.086273  167468 logs.go:282] 0 containers: []
	W1009 19:14:03.086279  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:03.086286  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:03.086351  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:03.113417  167468 cri.go:89] found id: ""
	I1009 19:14:03.113437  167468 logs.go:282] 0 containers: []
	W1009 19:14:03.113444  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:03.113450  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:03.113507  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:03.141043  167468 cri.go:89] found id: ""
	I1009 19:14:03.141064  167468 logs.go:282] 0 containers: []
	W1009 19:14:03.141073  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:03.141080  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:03.141139  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:03.168482  167468 cri.go:89] found id: ""
	I1009 19:14:03.168500  167468 logs.go:282] 0 containers: []
	W1009 19:14:03.168510  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:03.168515  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:03.168562  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:03.195613  167468 cri.go:89] found id: ""
	I1009 19:14:03.195634  167468 logs.go:282] 0 containers: []
	W1009 19:14:03.195640  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:03.195648  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:03.195700  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:03.223082  167468 cri.go:89] found id: ""
	I1009 19:14:03.223102  167468 logs.go:282] 0 containers: []
	W1009 19:14:03.223113  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:03.223126  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:03.223140  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:03.289799  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:03.289826  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:03.302088  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:03.302108  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:03.361951  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:03.354529   11063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:03.355199   11063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:03.356810   11063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:03.357258   11063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:03.358331   11063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:03.354529   11063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:03.355199   11063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:03.356810   11063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:03.357258   11063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:03.358331   11063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:03.361965  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:03.361976  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:03.424809  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:03.424834  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:05.957140  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:05.968183  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:05.968233  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:05.994237  167468 cri.go:89] found id: ""
	I1009 19:14:05.994255  167468 logs.go:282] 0 containers: []
	W1009 19:14:05.994263  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:05.994268  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:05.994316  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:06.023106  167468 cri.go:89] found id: ""
	I1009 19:14:06.023124  167468 logs.go:282] 0 containers: []
	W1009 19:14:06.023131  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:06.023136  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:06.023194  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:06.049764  167468 cri.go:89] found id: ""
	I1009 19:14:06.049780  167468 logs.go:282] 0 containers: []
	W1009 19:14:06.049786  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:06.049790  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:06.049838  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:06.077023  167468 cri.go:89] found id: ""
	I1009 19:14:06.077038  167468 logs.go:282] 0 containers: []
	W1009 19:14:06.077044  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:06.077049  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:06.077097  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:06.105013  167468 cri.go:89] found id: ""
	I1009 19:14:06.105029  167468 logs.go:282] 0 containers: []
	W1009 19:14:06.105035  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:06.105040  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:06.105089  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:06.132736  167468 cri.go:89] found id: ""
	I1009 19:14:06.132754  167468 logs.go:282] 0 containers: []
	W1009 19:14:06.132761  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:06.132766  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:06.132813  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:06.160441  167468 cri.go:89] found id: ""
	I1009 19:14:06.160459  167468 logs.go:282] 0 containers: []
	W1009 19:14:06.160467  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:06.160477  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:06.160493  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:06.230865  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:06.230891  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:06.243543  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:06.243563  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:06.302803  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:06.294756   11191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:06.295321   11191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:06.296956   11191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:06.297533   11191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:06.299112   11191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:06.294756   11191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:06.295321   11191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:06.296956   11191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:06.297533   11191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:06.299112   11191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:06.302821  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:06.302836  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:06.363249  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:06.363274  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:08.896321  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:08.907567  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:08.907629  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:08.935200  167468 cri.go:89] found id: ""
	I1009 19:14:08.935217  167468 logs.go:282] 0 containers: []
	W1009 19:14:08.935224  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:08.935229  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:08.935279  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:08.962910  167468 cri.go:89] found id: ""
	I1009 19:14:08.962930  167468 logs.go:282] 0 containers: []
	W1009 19:14:08.962939  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:08.962945  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:08.963017  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:08.990218  167468 cri.go:89] found id: ""
	I1009 19:14:08.990235  167468 logs.go:282] 0 containers: []
	W1009 19:14:08.990252  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:08.990258  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:08.990306  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:09.015799  167468 cri.go:89] found id: ""
	I1009 19:14:09.015815  167468 logs.go:282] 0 containers: []
	W1009 19:14:09.015822  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:09.015826  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:09.015875  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:09.042470  167468 cri.go:89] found id: ""
	I1009 19:14:09.042485  167468 logs.go:282] 0 containers: []
	W1009 19:14:09.042492  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:09.042497  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:09.042553  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:09.069980  167468 cri.go:89] found id: ""
	I1009 19:14:09.069996  167468 logs.go:282] 0 containers: []
	W1009 19:14:09.070006  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:09.070011  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:09.070062  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:09.097327  167468 cri.go:89] found id: ""
	I1009 19:14:09.097347  167468 logs.go:282] 0 containers: []
	W1009 19:14:09.097358  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:09.097369  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:09.097395  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:09.166588  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:09.166613  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:09.179033  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:09.179053  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:09.237875  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:09.230485   11317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:09.231039   11317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:09.232636   11317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:09.233112   11317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:09.234282   11317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:09.230485   11317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:09.231039   11317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:09.232636   11317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:09.233112   11317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:09.234282   11317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:09.237886  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:09.237896  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:09.297149  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:09.297173  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:11.829632  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:11.841003  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:11.841054  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:11.868151  167468 cri.go:89] found id: ""
	I1009 19:14:11.868168  167468 logs.go:282] 0 containers: []
	W1009 19:14:11.868175  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:11.868181  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:11.868229  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:11.894303  167468 cri.go:89] found id: ""
	I1009 19:14:11.894319  167468 logs.go:282] 0 containers: []
	W1009 19:14:11.894325  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:11.894333  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:11.894406  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:11.921553  167468 cri.go:89] found id: ""
	I1009 19:14:11.921569  167468 logs.go:282] 0 containers: []
	W1009 19:14:11.921576  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:11.921582  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:11.921640  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:11.948362  167468 cri.go:89] found id: ""
	I1009 19:14:11.948392  167468 logs.go:282] 0 containers: []
	W1009 19:14:11.948404  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:11.948410  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:11.948463  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:11.975053  167468 cri.go:89] found id: ""
	I1009 19:14:11.975074  167468 logs.go:282] 0 containers: []
	W1009 19:14:11.975082  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:11.975090  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:11.975147  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:12.002192  167468 cri.go:89] found id: ""
	I1009 19:14:12.002206  167468 logs.go:282] 0 containers: []
	W1009 19:14:12.002212  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:12.002217  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:12.002263  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:12.029135  167468 cri.go:89] found id: ""
	I1009 19:14:12.029150  167468 logs.go:282] 0 containers: []
	W1009 19:14:12.029156  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:12.029165  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:12.029231  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:12.089147  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:12.089168  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:12.123009  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:12.123029  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:12.194542  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:12.194566  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:12.207426  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:12.207447  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:12.268201  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:12.260274   11469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:12.260836   11469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:12.262548   11469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:12.263082   11469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:12.264595   11469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:12.260274   11469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:12.260836   11469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:12.262548   11469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:12.263082   11469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:12.264595   11469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
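	--
	The repeated "connection refused" on localhost:8441 is consistent with the empty crictl listings in the same cycles: no kube-apiserver container was ever created, so nothing is listening on the apiserver port. A quick manual confirmation on the node could look like the following (hypothetical checks, not part of the test itself):
	
	# Confirm nothing is bound to the apiserver port (8441 per the log above).
	sudo ss -ltnp | grep 8441 || echo "no listener on 8441"
	curl -sk https://localhost:8441/healthz || echo "apiserver unreachable"
	--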
	I1009 19:14:14.768939  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:14.779994  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:14.780055  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:14.806625  167468 cri.go:89] found id: ""
	I1009 19:14:14.806642  167468 logs.go:282] 0 containers: []
	W1009 19:14:14.806648  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:14.806653  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:14.806709  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:14.834144  167468 cri.go:89] found id: ""
	I1009 19:14:14.834161  167468 logs.go:282] 0 containers: []
	W1009 19:14:14.834168  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:14.834173  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:14.834217  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:14.859842  167468 cri.go:89] found id: ""
	I1009 19:14:14.859857  167468 logs.go:282] 0 containers: []
	W1009 19:14:14.859863  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:14.859868  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:14.859915  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:14.886983  167468 cri.go:89] found id: ""
	I1009 19:14:14.887002  167468 logs.go:282] 0 containers: []
	W1009 19:14:14.887011  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:14.887017  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:14.887077  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:14.915279  167468 cri.go:89] found id: ""
	I1009 19:14:14.915297  167468 logs.go:282] 0 containers: []
	W1009 19:14:14.915304  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:14.915310  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:14.915367  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:14.943496  167468 cri.go:89] found id: ""
	I1009 19:14:14.943515  167468 logs.go:282] 0 containers: []
	W1009 19:14:14.943522  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:14.943527  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:14.943576  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:14.971449  167468 cri.go:89] found id: ""
	I1009 19:14:14.971466  167468 logs.go:282] 0 containers: []
	W1009 19:14:14.971472  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:14.971481  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:14.971492  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:15.002283  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:15.002302  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:15.068728  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:15.068752  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:15.080899  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:15.080916  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:15.141200  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:15.133517   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:15.134060   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:15.135645   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:15.136103   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:15.137648   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:15.133517   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:15.134060   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:15.135645   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:15.136103   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:15.137648   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:15.141211  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:15.141222  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:17.703757  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:17.715432  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:17.715488  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:17.742801  167468 cri.go:89] found id: ""
	I1009 19:14:17.742818  167468 logs.go:282] 0 containers: []
	W1009 19:14:17.742825  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:17.742831  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:17.742894  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:17.770041  167468 cri.go:89] found id: ""
	I1009 19:14:17.770058  167468 logs.go:282] 0 containers: []
	W1009 19:14:17.770067  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:17.770074  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:17.770123  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:17.798373  167468 cri.go:89] found id: ""
	I1009 19:14:17.798401  167468 logs.go:282] 0 containers: []
	W1009 19:14:17.798410  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:17.798416  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:17.798467  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:17.826589  167468 cri.go:89] found id: ""
	I1009 19:14:17.826607  167468 logs.go:282] 0 containers: []
	W1009 19:14:17.826613  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:17.826619  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:17.826668  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:17.853849  167468 cri.go:89] found id: ""
	I1009 19:14:17.853870  167468 logs.go:282] 0 containers: []
	W1009 19:14:17.853879  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:17.853886  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:17.853940  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:17.880708  167468 cri.go:89] found id: ""
	I1009 19:14:17.880728  167468 logs.go:282] 0 containers: []
	W1009 19:14:17.880738  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:17.880745  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:17.880801  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:17.907949  167468 cri.go:89] found id: ""
	I1009 19:14:17.907970  167468 logs.go:282] 0 containers: []
	W1009 19:14:17.907980  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:17.907990  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:17.908000  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:17.977368  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:17.977398  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:17.989589  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:17.989607  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:18.048403  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:18.040956   11691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:18.041628   11691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:18.043275   11691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:18.043797   11691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:18.044915   11691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:18.040956   11691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:18.041628   11691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:18.043275   11691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:18.043797   11691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:18.044915   11691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:18.048425  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:18.048436  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:18.109745  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:18.109768  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:20.641770  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:20.652651  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:20.652706  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:20.680068  167468 cri.go:89] found id: ""
	I1009 19:14:20.680087  167468 logs.go:282] 0 containers: []
	W1009 19:14:20.680097  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:20.680104  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:20.680154  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:20.707239  167468 cri.go:89] found id: ""
	I1009 19:14:20.707258  167468 logs.go:282] 0 containers: []
	W1009 19:14:20.707265  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:20.707270  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:20.707326  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:20.735326  167468 cri.go:89] found id: ""
	I1009 19:14:20.735344  167468 logs.go:282] 0 containers: []
	W1009 19:14:20.735354  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:20.735361  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:20.735435  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:20.761699  167468 cri.go:89] found id: ""
	I1009 19:14:20.761716  167468 logs.go:282] 0 containers: []
	W1009 19:14:20.761723  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:20.761730  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:20.761779  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:20.789487  167468 cri.go:89] found id: ""
	I1009 19:14:20.789503  167468 logs.go:282] 0 containers: []
	W1009 19:14:20.789510  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:20.789515  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:20.789564  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:20.815048  167468 cri.go:89] found id: ""
	I1009 19:14:20.815068  167468 logs.go:282] 0 containers: []
	W1009 19:14:20.815077  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:20.815085  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:20.815133  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:20.841854  167468 cri.go:89] found id: ""
	I1009 19:14:20.841869  167468 logs.go:282] 0 containers: []
	W1009 19:14:20.841876  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:20.841884  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:20.841897  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:20.902143  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:20.893674   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:20.894242   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:20.895810   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:20.896216   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:20.898541   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:20.893674   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:20.894242   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:20.895810   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:20.896216   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:20.898541   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:20.902156  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:20.902168  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:20.963057  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:20.963081  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:20.994033  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:20.994052  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:21.059710  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:21.059732  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:23.573543  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:23.585055  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:23.585120  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:23.611298  167468 cri.go:89] found id: ""
	I1009 19:14:23.611316  167468 logs.go:282] 0 containers: []
	W1009 19:14:23.611327  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:23.611334  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:23.611403  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:23.639797  167468 cri.go:89] found id: ""
	I1009 19:14:23.639813  167468 logs.go:282] 0 containers: []
	W1009 19:14:23.639822  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:23.639828  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:23.639894  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:23.667001  167468 cri.go:89] found id: ""
	I1009 19:14:23.667016  167468 logs.go:282] 0 containers: []
	W1009 19:14:23.667023  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:23.667028  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:23.667073  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:23.693487  167468 cri.go:89] found id: ""
	I1009 19:14:23.693502  167468 logs.go:282] 0 containers: []
	W1009 19:14:23.693510  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:23.693514  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:23.693565  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:23.720512  167468 cri.go:89] found id: ""
	I1009 19:14:23.720527  167468 logs.go:282] 0 containers: []
	W1009 19:14:23.720533  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:23.720538  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:23.720585  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:23.748368  167468 cri.go:89] found id: ""
	I1009 19:14:23.748408  167468 logs.go:282] 0 containers: []
	W1009 19:14:23.748418  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:23.748425  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:23.748488  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:23.776610  167468 cri.go:89] found id: ""
	I1009 19:14:23.776626  167468 logs.go:282] 0 containers: []
	W1009 19:14:23.776634  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:23.776681  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:23.776697  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:23.847110  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:23.847133  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:23.860359  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:23.860390  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:23.920518  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:23.912620   11928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:23.913240   11928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:23.914784   11928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:23.915304   11928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:23.916845   11928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:23.912620   11928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:23.913240   11928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:23.914784   11928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:23.915304   11928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:23.916845   11928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:23.920529  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:23.920541  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:23.985060  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:23.985084  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:26.518171  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:26.529182  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:26.529244  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:26.555907  167468 cri.go:89] found id: ""
	I1009 19:14:26.555925  167468 logs.go:282] 0 containers: []
	W1009 19:14:26.555936  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:26.555942  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:26.555992  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:26.583126  167468 cri.go:89] found id: ""
	I1009 19:14:26.583144  167468 logs.go:282] 0 containers: []
	W1009 19:14:26.583155  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:26.583162  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:26.583223  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:26.609859  167468 cri.go:89] found id: ""
	I1009 19:14:26.609880  167468 logs.go:282] 0 containers: []
	W1009 19:14:26.609889  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:26.609894  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:26.609949  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:26.635864  167468 cri.go:89] found id: ""
	I1009 19:14:26.635883  167468 logs.go:282] 0 containers: []
	W1009 19:14:26.635890  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:26.635895  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:26.635978  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:26.663639  167468 cri.go:89] found id: ""
	I1009 19:14:26.663658  167468 logs.go:282] 0 containers: []
	W1009 19:14:26.663664  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:26.663670  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:26.663718  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:26.690743  167468 cri.go:89] found id: ""
	I1009 19:14:26.690759  167468 logs.go:282] 0 containers: []
	W1009 19:14:26.690766  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:26.690772  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:26.690830  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:26.718602  167468 cri.go:89] found id: ""
	I1009 19:14:26.718621  167468 logs.go:282] 0 containers: []
	W1009 19:14:26.718627  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:26.718636  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:26.718646  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:26.789980  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:26.790003  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:26.802817  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:26.802837  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:26.861119  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:26.853689   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:26.854304   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:26.855781   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:26.856245   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:26.857603   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:26.853689   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:26.854304   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:26.855781   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:26.856245   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:26.857603   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:26.861132  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:26.861144  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:26.923808  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:26.923846  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:29.457408  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:29.468649  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:29.468701  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:29.496077  167468 cri.go:89] found id: ""
	I1009 19:14:29.496093  167468 logs.go:282] 0 containers: []
	W1009 19:14:29.496099  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:29.496105  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:29.496153  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:29.523269  167468 cri.go:89] found id: ""
	I1009 19:14:29.523286  167468 logs.go:282] 0 containers: []
	W1009 19:14:29.523294  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:29.523299  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:29.523354  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:29.551202  167468 cri.go:89] found id: ""
	I1009 19:14:29.551218  167468 logs.go:282] 0 containers: []
	W1009 19:14:29.551224  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:29.551229  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:29.551277  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:29.578618  167468 cri.go:89] found id: ""
	I1009 19:14:29.578633  167468 logs.go:282] 0 containers: []
	W1009 19:14:29.578640  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:29.578645  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:29.578699  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:29.605239  167468 cri.go:89] found id: ""
	I1009 19:14:29.605257  167468 logs.go:282] 0 containers: []
	W1009 19:14:29.605267  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:29.605273  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:29.605320  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:29.632558  167468 cri.go:89] found id: ""
	I1009 19:14:29.632581  167468 logs.go:282] 0 containers: []
	W1009 19:14:29.632589  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:29.632595  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:29.632644  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:29.660045  167468 cri.go:89] found id: ""
	I1009 19:14:29.660061  167468 logs.go:282] 0 containers: []
	W1009 19:14:29.660067  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:29.660076  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:29.660087  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:29.689848  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:29.689866  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:29.759204  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:29.759227  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:29.771334  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:29.771352  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:29.830651  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:29.823435   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:29.824026   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:29.825599   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:29.826136   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:29.827250   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:29.823435   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:29.824026   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:29.825599   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:29.826136   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:29.827250   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:29.830667  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:29.830678  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:32.393048  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:32.405075  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:32.405143  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:32.434099  167468 cri.go:89] found id: ""
	I1009 19:14:32.434119  167468 logs.go:282] 0 containers: []
	W1009 19:14:32.434136  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:32.434141  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:32.434199  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:32.461266  167468 cri.go:89] found id: ""
	I1009 19:14:32.461294  167468 logs.go:282] 0 containers: []
	W1009 19:14:32.461304  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:32.461310  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:32.461361  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:32.488620  167468 cri.go:89] found id: ""
	I1009 19:14:32.488636  167468 logs.go:282] 0 containers: []
	W1009 19:14:32.488644  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:32.488649  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:32.488696  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:32.515907  167468 cri.go:89] found id: ""
	I1009 19:14:32.515924  167468 logs.go:282] 0 containers: []
	W1009 19:14:32.515931  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:32.515936  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:32.515984  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:32.543671  167468 cri.go:89] found id: ""
	I1009 19:14:32.543690  167468 logs.go:282] 0 containers: []
	W1009 19:14:32.543697  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:32.543703  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:32.543751  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:32.571189  167468 cri.go:89] found id: ""
	I1009 19:14:32.571205  167468 logs.go:282] 0 containers: []
	W1009 19:14:32.571211  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:32.571216  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:32.571261  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:32.598521  167468 cri.go:89] found id: ""
	I1009 19:14:32.598539  167468 logs.go:282] 0 containers: []
	W1009 19:14:32.598546  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:32.598554  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:32.598565  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:32.663582  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:32.663609  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:32.675873  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:32.675891  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:32.735973  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:32.728326   12302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:32.728914   12302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:32.730601   12302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:32.731110   12302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:32.732693   12302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:32.728326   12302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:32.728914   12302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:32.730601   12302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:32.731110   12302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:32.732693   12302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:32.735984  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:32.735995  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:32.799860  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:32.799882  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:35.330659  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:35.341858  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:35.341908  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:35.369356  167468 cri.go:89] found id: ""
	I1009 19:14:35.369371  167468 logs.go:282] 0 containers: []
	W1009 19:14:35.369396  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:35.369403  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:35.369454  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:35.397530  167468 cri.go:89] found id: ""
	I1009 19:14:35.397549  167468 logs.go:282] 0 containers: []
	W1009 19:14:35.397556  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:35.397561  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:35.397613  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:35.425543  167468 cri.go:89] found id: ""
	I1009 19:14:35.425565  167468 logs.go:282] 0 containers: []
	W1009 19:14:35.425572  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:35.425577  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:35.425629  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:35.451820  167468 cri.go:89] found id: ""
	I1009 19:14:35.451912  167468 logs.go:282] 0 containers: []
	W1009 19:14:35.451924  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:35.451932  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:35.452003  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:35.479131  167468 cri.go:89] found id: ""
	I1009 19:14:35.479149  167468 logs.go:282] 0 containers: []
	W1009 19:14:35.479158  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:35.479165  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:35.479226  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:35.507763  167468 cri.go:89] found id: ""
	I1009 19:14:35.507793  167468 logs.go:282] 0 containers: []
	W1009 19:14:35.507802  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:35.507807  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:35.507856  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:35.536306  167468 cri.go:89] found id: ""
	I1009 19:14:35.536323  167468 logs.go:282] 0 containers: []
	W1009 19:14:35.536329  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:35.536337  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:35.536348  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:35.602873  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:35.602895  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:35.615060  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:35.615079  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:35.674681  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:35.666563   12418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:35.667233   12418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:35.668881   12418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:35.669447   12418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:35.671017   12418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:35.666563   12418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:35.667233   12418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:35.668881   12418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:35.669447   12418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:35.671017   12418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:35.674694  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:35.674705  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:35.738408  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:35.738431  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:38.270303  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:38.281687  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:38.281748  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:38.309100  167468 cri.go:89] found id: ""
	I1009 19:14:38.309115  167468 logs.go:282] 0 containers: []
	W1009 19:14:38.309121  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:38.309127  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:38.309175  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:38.337672  167468 cri.go:89] found id: ""
	I1009 19:14:38.337689  167468 logs.go:282] 0 containers: []
	W1009 19:14:38.337697  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:38.337702  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:38.337757  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:38.366315  167468 cri.go:89] found id: ""
	I1009 19:14:38.366331  167468 logs.go:282] 0 containers: []
	W1009 19:14:38.366338  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:38.366343  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:38.366410  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:38.394168  167468 cri.go:89] found id: ""
	I1009 19:14:38.394184  167468 logs.go:282] 0 containers: []
	W1009 19:14:38.394191  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:38.394195  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:38.394249  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:38.422647  167468 cri.go:89] found id: ""
	I1009 19:14:38.422667  167468 logs.go:282] 0 containers: []
	W1009 19:14:38.422678  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:38.422685  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:38.422772  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:38.452008  167468 cri.go:89] found id: ""
	I1009 19:14:38.452026  167468 logs.go:282] 0 containers: []
	W1009 19:14:38.452033  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:38.452038  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:38.452099  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:38.480564  167468 cri.go:89] found id: ""
	I1009 19:14:38.480586  167468 logs.go:282] 0 containers: []
	W1009 19:14:38.480597  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:38.480607  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:38.480624  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:38.547918  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:38.547950  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:38.559951  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:38.559971  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:38.618131  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:38.610538   12552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:38.611169   12552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:38.612854   12552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:38.613360   12552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:38.614757   12552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:38.610538   12552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:38.611169   12552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:38.612854   12552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:38.613360   12552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:38.614757   12552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
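	(Every "describe nodes" attempt in this run fails the same way: kubectl on the node cannot reach the API server at localhost:8441 (connection refused), which is consistent with the empty kube-apiserver listings above. A quick manual cross-check along the same lines, assuming shell access to the node; the /livez health path is a standard apiserver endpoint and is an assumption here, not something taken from this report:

	    # Is anything listening on the apiserver port the log reports (8441)?
	    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
	    # Probe the apiserver health endpoint directly (path assumed; TLS verification skipped).
	    curl --insecure --max-time 5 https://localhost:8441/livez
	    # Mirror the log's own check for an apiserver container.
	    sudo crictl ps -a --name=kube-apiserver
	)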
	I1009 19:14:38.618142  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:38.618153  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:38.682619  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:38.682643  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:41.214700  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:41.225692  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:41.225744  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:41.252521  167468 cri.go:89] found id: ""
	I1009 19:14:41.252537  167468 logs.go:282] 0 containers: []
	W1009 19:14:41.252543  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:41.252548  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:41.252598  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:41.280073  167468 cri.go:89] found id: ""
	I1009 19:14:41.280090  167468 logs.go:282] 0 containers: []
	W1009 19:14:41.280095  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:41.280100  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:41.280147  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:41.307469  167468 cri.go:89] found id: ""
	I1009 19:14:41.307490  167468 logs.go:282] 0 containers: []
	W1009 19:14:41.307499  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:41.307505  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:41.307554  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:41.334966  167468 cri.go:89] found id: ""
	I1009 19:14:41.334982  167468 logs.go:282] 0 containers: []
	W1009 19:14:41.334991  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:41.334998  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:41.335060  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:41.362582  167468 cri.go:89] found id: ""
	I1009 19:14:41.362600  167468 logs.go:282] 0 containers: []
	W1009 19:14:41.362607  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:41.362612  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:41.362667  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:41.390351  167468 cri.go:89] found id: ""
	I1009 19:14:41.390369  167468 logs.go:282] 0 containers: []
	W1009 19:14:41.390390  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:41.390397  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:41.390453  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:41.417390  167468 cri.go:89] found id: ""
	I1009 19:14:41.417410  167468 logs.go:282] 0 containers: []
	W1009 19:14:41.417418  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:41.417428  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:41.417438  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:41.484701  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:41.484724  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:41.497051  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:41.497068  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:41.555902  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:41.548817   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:41.549403   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:41.550938   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:41.551329   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:41.552636   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:41.548817   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:41.549403   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:41.550938   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:41.551329   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:41.552636   12678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:41.555915  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:41.555927  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:41.618927  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:41.618950  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:44.151566  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:44.162952  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:44.163024  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:44.188939  167468 cri.go:89] found id: ""
	I1009 19:14:44.188954  167468 logs.go:282] 0 containers: []
	W1009 19:14:44.188962  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:44.188969  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:44.189053  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:44.216484  167468 cri.go:89] found id: ""
	I1009 19:14:44.216504  167468 logs.go:282] 0 containers: []
	W1009 19:14:44.216514  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:44.216520  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:44.216575  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:44.244062  167468 cri.go:89] found id: ""
	I1009 19:14:44.244079  167468 logs.go:282] 0 containers: []
	W1009 19:14:44.244089  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:44.244096  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:44.244164  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:44.272014  167468 cri.go:89] found id: ""
	I1009 19:14:44.272031  167468 logs.go:282] 0 containers: []
	W1009 19:14:44.272040  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:44.272047  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:44.272099  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:44.298566  167468 cri.go:89] found id: ""
	I1009 19:14:44.298584  167468 logs.go:282] 0 containers: []
	W1009 19:14:44.298598  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:44.298605  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:44.298666  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:44.327273  167468 cri.go:89] found id: ""
	I1009 19:14:44.327290  167468 logs.go:282] 0 containers: []
	W1009 19:14:44.327297  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:44.327302  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:44.327352  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:44.354325  167468 cri.go:89] found id: ""
	I1009 19:14:44.354341  167468 logs.go:282] 0 containers: []
	W1009 19:14:44.354347  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:44.354356  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:44.354367  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:44.413429  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:44.405599   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:44.406160   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:44.407858   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:44.408392   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:44.409925   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:44.405599   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:44.406160   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:44.407858   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:44.408392   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:44.409925   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:44.413442  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:44.413453  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:44.473888  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:44.473911  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:44.506171  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:44.506189  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:44.572347  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:44.572369  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
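	(The kubelet, dmesg, and CRI-O sections in each pass are pulled with journalctl and dmesg filters. The same commands, copied from the log, can be run directly on the node when debugging by hand:

	    sudo journalctl -u kubelet -n 400     # last 400 kubelet journal lines
	    sudo journalctl -u crio -n 400        # last 400 CRI-O journal lines
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	)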
	I1009 19:14:47.086686  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:47.098491  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:47.098552  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:47.129087  167468 cri.go:89] found id: ""
	I1009 19:14:47.129104  167468 logs.go:282] 0 containers: []
	W1009 19:14:47.129111  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:47.129116  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:47.129163  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:47.157143  167468 cri.go:89] found id: ""
	I1009 19:14:47.157161  167468 logs.go:282] 0 containers: []
	W1009 19:14:47.157167  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:47.157172  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:47.157223  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:47.184337  167468 cri.go:89] found id: ""
	I1009 19:14:47.184352  167468 logs.go:282] 0 containers: []
	W1009 19:14:47.184358  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:47.184365  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:47.184429  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:47.213264  167468 cri.go:89] found id: ""
	I1009 19:14:47.213280  167468 logs.go:282] 0 containers: []
	W1009 19:14:47.213291  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:47.213298  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:47.213356  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:47.240766  167468 cri.go:89] found id: ""
	I1009 19:14:47.240786  167468 logs.go:282] 0 containers: []
	W1009 19:14:47.240793  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:47.240798  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:47.240847  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:47.267656  167468 cri.go:89] found id: ""
	I1009 19:14:47.267675  167468 logs.go:282] 0 containers: []
	W1009 19:14:47.267686  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:47.267692  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:47.267760  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:47.297799  167468 cri.go:89] found id: ""
	I1009 19:14:47.297817  167468 logs.go:282] 0 containers: []
	W1009 19:14:47.297826  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:47.297837  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:47.297848  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:47.328303  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:47.328319  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:47.398644  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:47.398668  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:47.411075  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:47.411098  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:47.470237  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:47.462608   12938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:47.463190   12938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:47.464787   12938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:47.465180   12938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:47.466459   12938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:47.462608   12938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:47.463190   12938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:47.464787   12938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:47.465180   12938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:47.466459   12938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:47.470247  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:47.470260  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:50.035757  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:50.047268  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:50.047318  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:50.074626  167468 cri.go:89] found id: ""
	I1009 19:14:50.074644  167468 logs.go:282] 0 containers: []
	W1009 19:14:50.074653  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:50.074659  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:50.074726  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:50.101587  167468 cri.go:89] found id: ""
	I1009 19:14:50.101606  167468 logs.go:282] 0 containers: []
	W1009 19:14:50.101616  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:50.101622  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:50.101689  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:50.128912  167468 cri.go:89] found id: ""
	I1009 19:14:50.128964  167468 logs.go:282] 0 containers: []
	W1009 19:14:50.128983  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:50.128992  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:50.129079  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:50.157233  167468 cri.go:89] found id: ""
	I1009 19:14:50.157253  167468 logs.go:282] 0 containers: []
	W1009 19:14:50.157261  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:50.157266  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:50.157319  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:50.185689  167468 cri.go:89] found id: ""
	I1009 19:14:50.185716  167468 logs.go:282] 0 containers: []
	W1009 19:14:50.185725  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:50.185731  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:50.185792  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:50.213094  167468 cri.go:89] found id: ""
	I1009 19:14:50.213111  167468 logs.go:282] 0 containers: []
	W1009 19:14:50.213120  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:50.213128  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:50.213182  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:50.241332  167468 cri.go:89] found id: ""
	I1009 19:14:50.241348  167468 logs.go:282] 0 containers: []
	W1009 19:14:50.241355  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:50.241364  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:50.241393  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:50.302370  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:50.293815   13042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:50.294883   13042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:50.296524   13042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:50.296998   13042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:50.298663   13042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:50.293815   13042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:50.294883   13042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:50.296524   13042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:50.296998   13042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:50.298663   13042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:50.302398  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:50.302412  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:50.365923  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:50.365946  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:50.396814  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:50.396831  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:50.465484  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:50.465506  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:52.979572  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:52.990584  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:52.990647  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:53.017772  167468 cri.go:89] found id: ""
	I1009 19:14:53.017788  167468 logs.go:282] 0 containers: []
	W1009 19:14:53.017795  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:53.017799  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:53.017848  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:53.043918  167468 cri.go:89] found id: ""
	I1009 19:14:53.043945  167468 logs.go:282] 0 containers: []
	W1009 19:14:53.043952  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:53.043957  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:53.044008  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:53.072767  167468 cri.go:89] found id: ""
	I1009 19:14:53.072786  167468 logs.go:282] 0 containers: []
	W1009 19:14:53.072795  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:53.072802  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:53.072854  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:53.100586  167468 cri.go:89] found id: ""
	I1009 19:14:53.100602  167468 logs.go:282] 0 containers: []
	W1009 19:14:53.100608  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:53.100613  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:53.100660  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:53.127701  167468 cri.go:89] found id: ""
	I1009 19:14:53.127720  167468 logs.go:282] 0 containers: []
	W1009 19:14:53.127727  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:53.127732  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:53.127779  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:53.155552  167468 cri.go:89] found id: ""
	I1009 19:14:53.155571  167468 logs.go:282] 0 containers: []
	W1009 19:14:53.155578  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:53.155583  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:53.155640  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:53.183112  167468 cri.go:89] found id: ""
	I1009 19:14:53.183128  167468 logs.go:282] 0 containers: []
	W1009 19:14:53.183144  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:53.183156  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:53.183171  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:53.243405  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:53.235518   13164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:53.236187   13164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:53.237791   13164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:53.238263   13164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:53.239863   13164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:53.235518   13164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:53.236187   13164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:53.237791   13164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:53.238263   13164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:53.239863   13164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:53.243416  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:53.243427  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:53.305606  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:53.305630  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:53.335326  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:53.335345  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:53.403649  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:53.403673  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:55.918864  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:55.930447  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:55.930507  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:55.957185  167468 cri.go:89] found id: ""
	I1009 19:14:55.957201  167468 logs.go:282] 0 containers: []
	W1009 19:14:55.957207  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:55.957213  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:55.957265  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:55.984214  167468 cri.go:89] found id: ""
	I1009 19:14:55.984231  167468 logs.go:282] 0 containers: []
	W1009 19:14:55.984237  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:55.984243  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:55.984307  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:56.013635  167468 cri.go:89] found id: ""
	I1009 19:14:56.013654  167468 logs.go:282] 0 containers: []
	W1009 19:14:56.013663  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:56.013671  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:56.013735  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:56.040775  167468 cri.go:89] found id: ""
	I1009 19:14:56.040792  167468 logs.go:282] 0 containers: []
	W1009 19:14:56.040798  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:56.040803  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:56.040849  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:56.066866  167468 cri.go:89] found id: ""
	I1009 19:14:56.066887  167468 logs.go:282] 0 containers: []
	W1009 19:14:56.066893  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:56.066900  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:56.066971  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:56.096224  167468 cri.go:89] found id: ""
	I1009 19:14:56.096240  167468 logs.go:282] 0 containers: []
	W1009 19:14:56.096247  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:56.096252  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:56.096300  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:56.123522  167468 cri.go:89] found id: ""
	I1009 19:14:56.123537  167468 logs.go:282] 0 containers: []
	W1009 19:14:56.123544  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:56.123552  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:56.123566  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:56.191640  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:56.191666  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:14:56.203892  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:56.203912  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:56.261630  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:56.253807   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:56.254343   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:56.256028   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:56.256654   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:56.258265   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:56.253807   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:56.254343   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:56.256028   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:56.256654   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:56.258265   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:56.261649  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:56.261663  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:56.326722  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:56.326745  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
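	(The "container status" step uses a shell fallback: crictl is located with which, falling back to the bare name, and if that command fails altogether the step falls back to docker ps. The same fallback as a standalone one-liner, under the assumption that at least one of the two CLIs is installed on the node:

	    # Prefer crictl; if it is missing or fails, fall back to docker.
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	)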
	I1009 19:14:58.857655  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:14:58.868964  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:14:58.869018  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:14:58.895416  167468 cri.go:89] found id: ""
	I1009 19:14:58.895434  167468 logs.go:282] 0 containers: []
	W1009 19:14:58.895441  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:14:58.895453  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:14:58.895511  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:14:58.922319  167468 cri.go:89] found id: ""
	I1009 19:14:58.922335  167468 logs.go:282] 0 containers: []
	W1009 19:14:58.922343  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:14:58.922348  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:14:58.922416  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:14:58.949902  167468 cri.go:89] found id: ""
	I1009 19:14:58.949918  167468 logs.go:282] 0 containers: []
	W1009 19:14:58.949925  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:14:58.949930  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:14:58.949978  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:14:58.978366  167468 cri.go:89] found id: ""
	I1009 19:14:58.978402  167468 logs.go:282] 0 containers: []
	W1009 19:14:58.978412  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:14:58.978418  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:14:58.978481  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:14:59.004783  167468 cri.go:89] found id: ""
	I1009 19:14:59.004802  167468 logs.go:282] 0 containers: []
	W1009 19:14:59.004812  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:14:59.004818  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:14:59.004875  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:14:59.031676  167468 cri.go:89] found id: ""
	I1009 19:14:59.031692  167468 logs.go:282] 0 containers: []
	W1009 19:14:59.031699  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:14:59.031704  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:14:59.031764  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:14:59.058880  167468 cri.go:89] found id: ""
	I1009 19:14:59.058899  167468 logs.go:282] 0 containers: []
	W1009 19:14:59.058909  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:14:59.058920  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:14:59.058933  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:14:59.117247  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:14:59.109634   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:59.110238   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:59.111830   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:59.112295   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:59.113884   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:14:59.109634   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:59.110238   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:59.111830   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:59.112295   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:14:59.113884   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:14:59.117261  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:14:59.117273  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:14:59.181757  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:14:59.181781  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:14:59.211839  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:14:59.211857  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:14:59.278338  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:14:59.278360  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:15:01.792200  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:15:01.803290  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:15:01.803341  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:15:01.830551  167468 cri.go:89] found id: ""
	I1009 19:15:01.830568  167468 logs.go:282] 0 containers: []
	W1009 19:15:01.830577  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:15:01.830584  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:15:01.830632  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:15:01.858835  167468 cri.go:89] found id: ""
	I1009 19:15:01.858853  167468 logs.go:282] 0 containers: []
	W1009 19:15:01.858859  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:15:01.858864  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:15:01.858910  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:15:01.885090  167468 cri.go:89] found id: ""
	I1009 19:15:01.885111  167468 logs.go:282] 0 containers: []
	W1009 19:15:01.885120  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:15:01.885127  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:15:01.885175  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:15:01.911802  167468 cri.go:89] found id: ""
	I1009 19:15:01.911819  167468 logs.go:282] 0 containers: []
	W1009 19:15:01.911827  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:15:01.911832  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:15:01.911880  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:15:01.938892  167468 cri.go:89] found id: ""
	I1009 19:15:01.938909  167468 logs.go:282] 0 containers: []
	W1009 19:15:01.938916  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:15:01.938927  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:15:01.938977  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:15:01.966243  167468 cri.go:89] found id: ""
	I1009 19:15:01.966259  167468 logs.go:282] 0 containers: []
	W1009 19:15:01.966265  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:15:01.966270  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:15:01.966320  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:15:01.993984  167468 cri.go:89] found id: ""
	I1009 19:15:01.994000  167468 logs.go:282] 0 containers: []
	W1009 19:15:01.994023  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:15:01.994032  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:15:01.994044  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:15:02.006125  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:15:02.006144  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:15:02.064780  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:15:02.057286   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:02.057806   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:02.059460   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:02.059974   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:02.061129   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:15:02.057286   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:02.057806   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:02.059460   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:02.059974   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:02.061129   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:15:02.064797  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:15:02.064810  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:15:02.134945  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:15:02.134968  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:15:02.165969  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:15:02.165989  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
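At this point the start-up loop is polling for a running kube-apiserver: it first checks for the process with pgrep, then asks CRI-O through crictl whether any control-plane containers exist, and because every query returns an empty list it falls back to collecting kubelet, dmesg, CRI-O and container-status logs before retrying a few seconds later. The same check can be reproduced by hand inside the node (a manual sketch, assuming crictl is reachable there, e.g. via minikube ssh):

    # list kube-apiserver containers known to CRI-O, running or exited;
    # an empty result corresponds to the "0 containers" lines in this log
    sudo crictl ps -a --quiet --name=kube-apiserver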
	I1009 19:15:04.734526  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:15:04.746112  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:15:04.746199  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:15:04.773650  167468 cri.go:89] found id: ""
	I1009 19:15:04.773669  167468 logs.go:282] 0 containers: []
	W1009 19:15:04.773680  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:15:04.773687  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:15:04.773748  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:15:04.800778  167468 cri.go:89] found id: ""
	I1009 19:15:04.800795  167468 logs.go:282] 0 containers: []
	W1009 19:15:04.800802  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:15:04.800807  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:15:04.800854  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:15:04.828717  167468 cri.go:89] found id: ""
	I1009 19:15:04.828734  167468 logs.go:282] 0 containers: []
	W1009 19:15:04.828741  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:15:04.828746  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:15:04.828809  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:15:04.856797  167468 cri.go:89] found id: ""
	I1009 19:15:04.856814  167468 logs.go:282] 0 containers: []
	W1009 19:15:04.856821  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:15:04.856826  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:15:04.856885  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:15:04.884077  167468 cri.go:89] found id: ""
	I1009 19:15:04.884099  167468 logs.go:282] 0 containers: []
	W1009 19:15:04.884110  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:15:04.884116  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:15:04.884164  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:15:04.911551  167468 cri.go:89] found id: ""
	I1009 19:15:04.911571  167468 logs.go:282] 0 containers: []
	W1009 19:15:04.911581  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:15:04.911588  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:15:04.911641  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:15:04.939637  167468 cri.go:89] found id: ""
	I1009 19:15:04.939656  167468 logs.go:282] 0 containers: []
	W1009 19:15:04.939665  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:15:04.939676  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:15:04.939691  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:15:05.000397  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:15:04.992804   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:04.993434   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:04.995032   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:04.995550   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:04.997065   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:15:04.992804   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:04.993434   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:04.995032   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:04.995550   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:04.997065   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:15:05.000414  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:15:05.000427  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:15:05.062558  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:15:05.062582  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:15:05.095113  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:15:05.095134  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:15:05.167688  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:15:05.167712  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:15:07.681917  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:15:07.692856  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:15:07.692912  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:15:07.720408  167468 cri.go:89] found id: ""
	I1009 19:15:07.720425  167468 logs.go:282] 0 containers: []
	W1009 19:15:07.720431  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:15:07.720436  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:15:07.720485  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:15:07.748034  167468 cri.go:89] found id: ""
	I1009 19:15:07.748055  167468 logs.go:282] 0 containers: []
	W1009 19:15:07.748064  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:15:07.748070  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:15:07.748124  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:15:07.775843  167468 cri.go:89] found id: ""
	I1009 19:15:07.775858  167468 logs.go:282] 0 containers: []
	W1009 19:15:07.775865  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:15:07.775870  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:15:07.775930  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:15:07.803455  167468 cri.go:89] found id: ""
	I1009 19:15:07.803475  167468 logs.go:282] 0 containers: []
	W1009 19:15:07.803485  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:15:07.803492  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:15:07.803543  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:15:07.831128  167468 cri.go:89] found id: ""
	I1009 19:15:07.831145  167468 logs.go:282] 0 containers: []
	W1009 19:15:07.831152  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:15:07.831157  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:15:07.831207  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:15:07.858576  167468 cri.go:89] found id: ""
	I1009 19:15:07.858594  167468 logs.go:282] 0 containers: []
	W1009 19:15:07.858601  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:15:07.858606  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:15:07.858655  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:15:07.885114  167468 cri.go:89] found id: ""
	I1009 19:15:07.885130  167468 logs.go:282] 0 containers: []
	W1009 19:15:07.885136  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:15:07.885144  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:15:07.885154  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:15:07.951050  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:15:07.951073  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:15:07.963260  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:15:07.963277  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:15:08.024291  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:15:08.016184   13786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:08.016764   13786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:08.018467   13786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:08.018939   13786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:08.020486   13786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:15:08.016184   13786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:08.016764   13786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:08.018467   13786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:08.018939   13786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:08.020486   13786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:15:08.024308  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:15:08.024321  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:15:08.089308  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:15:08.089331  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
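The repeated "Gathering logs for ..." steps map to ordinary shell commands on the node: journalctl for the kubelet and CRI-O units, a filtered dmesg, and crictl (with a docker fallback) for container status. If the same information is needed outside a test run, the commands can be issued directly over minikube ssh, for example:

    # last 400 lines of the CRI-O and kubelet unit logs
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    # every container CRI-O (or docker) knows about, including exited ones
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a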
	I1009 19:15:10.619798  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:15:10.631039  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:15:10.631095  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:15:10.658697  167468 cri.go:89] found id: ""
	I1009 19:15:10.658713  167468 logs.go:282] 0 containers: []
	W1009 19:15:10.658720  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:15:10.658728  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:15:10.658784  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:15:10.687176  167468 cri.go:89] found id: ""
	I1009 19:15:10.687195  167468 logs.go:282] 0 containers: []
	W1009 19:15:10.687203  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:15:10.687215  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:15:10.687274  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:15:10.714831  167468 cri.go:89] found id: ""
	I1009 19:15:10.714848  167468 logs.go:282] 0 containers: []
	W1009 19:15:10.714854  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:15:10.714859  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:15:10.714907  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:15:10.742110  167468 cri.go:89] found id: ""
	I1009 19:15:10.742128  167468 logs.go:282] 0 containers: []
	W1009 19:15:10.742135  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:15:10.742142  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:15:10.742191  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:15:10.770141  167468 cri.go:89] found id: ""
	I1009 19:15:10.770157  167468 logs.go:282] 0 containers: []
	W1009 19:15:10.770163  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:15:10.770169  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:15:10.770216  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:15:10.797767  167468 cri.go:89] found id: ""
	I1009 19:15:10.797787  167468 logs.go:282] 0 containers: []
	W1009 19:15:10.797797  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:15:10.797803  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:15:10.797857  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:15:10.825395  167468 cri.go:89] found id: ""
	I1009 19:15:10.825415  167468 logs.go:282] 0 containers: []
	W1009 19:15:10.825425  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:15:10.825436  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:15:10.825456  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:15:10.884784  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:15:10.877121   13895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:10.877714   13895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:10.879474   13895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:10.879980   13895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:10.881232   13895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:15:10.877121   13895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:10.877714   13895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:10.879474   13895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:10.879980   13895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:10.881232   13895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:15:10.884798  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:15:10.884812  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:15:10.949429  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:15:10.949455  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:15:10.980207  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:15:10.980224  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:15:11.045524  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:15:11.045548  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:15:13.559802  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:15:13.571007  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:15:13.571059  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:15:13.597407  167468 cri.go:89] found id: ""
	I1009 19:15:13.597424  167468 logs.go:282] 0 containers: []
	W1009 19:15:13.597430  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:15:13.597435  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:15:13.597489  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:15:13.623563  167468 cri.go:89] found id: ""
	I1009 19:15:13.623583  167468 logs.go:282] 0 containers: []
	W1009 19:15:13.623593  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:15:13.623600  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:15:13.623658  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:15:13.649574  167468 cri.go:89] found id: ""
	I1009 19:15:13.649597  167468 logs.go:282] 0 containers: []
	W1009 19:15:13.649606  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:15:13.649611  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:15:13.649660  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:15:13.677161  167468 cri.go:89] found id: ""
	I1009 19:15:13.677176  167468 logs.go:282] 0 containers: []
	W1009 19:15:13.677183  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:15:13.677187  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:15:13.677235  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:15:13.705296  167468 cri.go:89] found id: ""
	I1009 19:15:13.705311  167468 logs.go:282] 0 containers: []
	W1009 19:15:13.705317  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:15:13.705322  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:15:13.705368  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:15:13.732914  167468 cri.go:89] found id: ""
	I1009 19:15:13.732932  167468 logs.go:282] 0 containers: []
	W1009 19:15:13.732955  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:15:13.732961  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:15:13.733033  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:15:13.759867  167468 cri.go:89] found id: ""
	I1009 19:15:13.759883  167468 logs.go:282] 0 containers: []
	W1009 19:15:13.759890  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:15:13.759899  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:15:13.759908  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:15:13.823220  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:15:13.823246  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:15:13.853281  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:15:13.853303  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:15:13.923620  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:15:13.923644  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:15:13.936705  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:15:13.936724  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:15:13.996614  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:15:13.989060   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:13.989714   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:13.991209   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:13.991732   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:13.992915   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:15:13.989060   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:13.989714   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:13.991209   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:13.991732   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:13.992915   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:15:16.498568  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:15:16.509972  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:15:16.510034  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:15:16.537700  167468 cri.go:89] found id: ""
	I1009 19:15:16.537721  167468 logs.go:282] 0 containers: []
	W1009 19:15:16.537732  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:15:16.537739  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:15:16.537913  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:15:16.565255  167468 cri.go:89] found id: ""
	I1009 19:15:16.565271  167468 logs.go:282] 0 containers: []
	W1009 19:15:16.565277  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:15:16.565282  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:15:16.565328  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:15:16.594281  167468 cri.go:89] found id: ""
	I1009 19:15:16.594297  167468 logs.go:282] 0 containers: []
	W1009 19:15:16.594304  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:15:16.594309  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:15:16.594368  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:15:16.621490  167468 cri.go:89] found id: ""
	I1009 19:15:16.621508  167468 logs.go:282] 0 containers: []
	W1009 19:15:16.621515  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:15:16.621529  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:15:16.621581  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:15:16.650311  167468 cri.go:89] found id: ""
	I1009 19:15:16.650328  167468 logs.go:282] 0 containers: []
	W1009 19:15:16.650336  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:15:16.650343  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:15:16.650419  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:15:16.679567  167468 cri.go:89] found id: ""
	I1009 19:15:16.679587  167468 logs.go:282] 0 containers: []
	W1009 19:15:16.679595  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:15:16.679602  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:15:16.679650  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:15:16.708807  167468 cri.go:89] found id: ""
	I1009 19:15:16.708823  167468 logs.go:282] 0 containers: []
	W1009 19:15:16.708829  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:15:16.708839  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:15:16.708853  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:15:16.769188  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:15:16.769215  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:15:16.800501  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:15:16.800522  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:15:16.866546  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:15:16.866569  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:15:16.879721  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:15:16.879740  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:15:16.940801  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:15:16.932610   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:16.933242   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:16.935038   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:16.935548   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:16.937177   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:15:16.932610   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:16.933242   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:16.935038   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:16.935548   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:16.937177   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
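Each "describe nodes" attempt fails in the same way because the kubeconfig at /var/lib/minikube/kubeconfig targets localhost:8441 and nothing is listening on that port yet. Which endpoint that kubeconfig points at can be confirmed manually (an illustrative check, not part of the test itself):

    # print the API server URL used by the in-node kubeconfig
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl config view \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -o jsonpath='{.clusters[0].cluster.server}'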
	I1009 19:15:19.441719  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:15:19.452865  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:15:19.453106  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:15:19.480919  167468 cri.go:89] found id: ""
	I1009 19:15:19.480970  167468 logs.go:282] 0 containers: []
	W1009 19:15:19.480980  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:15:19.480986  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:15:19.481049  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:15:19.508412  167468 cri.go:89] found id: ""
	I1009 19:15:19.508428  167468 logs.go:282] 0 containers: []
	W1009 19:15:19.508435  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:15:19.508439  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:15:19.508505  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:15:19.535889  167468 cri.go:89] found id: ""
	I1009 19:15:19.535906  167468 logs.go:282] 0 containers: []
	W1009 19:15:19.535912  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:15:19.535919  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:15:19.535972  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:15:19.562894  167468 cri.go:89] found id: ""
	I1009 19:15:19.562910  167468 logs.go:282] 0 containers: []
	W1009 19:15:19.562916  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:15:19.562923  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:15:19.562982  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:15:19.590804  167468 cri.go:89] found id: ""
	I1009 19:15:19.590820  167468 logs.go:282] 0 containers: []
	W1009 19:15:19.590829  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:15:19.590837  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:15:19.590911  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:15:19.618341  167468 cri.go:89] found id: ""
	I1009 19:15:19.618356  167468 logs.go:282] 0 containers: []
	W1009 19:15:19.618362  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:15:19.618367  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:15:19.618440  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:15:19.646546  167468 cri.go:89] found id: ""
	I1009 19:15:19.646567  167468 logs.go:282] 0 containers: []
	W1009 19:15:19.646573  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:15:19.646581  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:15:19.646595  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:15:19.715578  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:15:19.715601  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:15:19.727811  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:15:19.727831  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:15:19.788607  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:15:19.780608   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:19.781186   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:19.782870   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:19.783356   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:19.784919   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:15:19.780608   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:19.781186   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:19.782870   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:19.783356   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:15:19.784919   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:15:19.788631  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:15:19.788647  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:15:19.847876  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:15:19.847900  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:15:22.381584  167468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:15:22.392889  167468 kubeadm.go:601] duration metric: took 4m4.348960089s to restartPrimaryControlPlane
	W1009 19:15:22.392982  167468 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 19:15:22.393529  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
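After roughly four minutes (4m4.348960089s) without a healthy API server, the restart path gives up and minikube falls back to wiping the existing control-plane state before attempting a fresh kubeadm init. The reset step logged above is equivalent to running the following on the node:

    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force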
	I1009 19:15:22.850885  167468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:15:22.864335  167468 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:15:22.873145  167468 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:15:22.873189  167468 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:15:22.881423  167468 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:15:22.881441  167468 kubeadm.go:157] found existing configuration files:
	
	I1009 19:15:22.881497  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 19:15:22.889858  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:15:22.889971  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:15:22.897974  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 19:15:22.906291  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:15:22.906340  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:15:22.914415  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 19:15:22.922536  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:15:22.922599  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:15:22.931121  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 19:15:22.939993  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:15:22.940039  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
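The cleanup above follows a grep-then-remove pattern: each /etc/kubernetes/*.conf is checked for the expected control-plane endpoint and deleted if the endpoint is not found. In this run every grep exits with status 2 because the files are already gone after the reset, so the rm -f calls are effectively no-ops. The pattern is equivalent to a small loop, shown here only as an illustration:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8441" \
        /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
    done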
	I1009 19:15:22.948051  167468 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:15:22.986697  167468 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:15:22.986748  167468 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:15:23.008875  167468 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:15:23.008934  167468 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:15:23.008988  167468 kubeadm.go:318] OS: Linux
	I1009 19:15:23.009036  167468 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:15:23.009103  167468 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:15:23.009177  167468 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:15:23.009236  167468 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:15:23.009299  167468 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:15:23.009395  167468 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:15:23.009455  167468 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:15:23.009494  167468 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:15:23.074858  167468 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:15:23.074976  167468 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:15:23.075090  167468 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:15:23.082442  167468 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:15:23.086775  167468 out.go:252]   - Generating certificates and keys ...
	I1009 19:15:23.086906  167468 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:15:23.086998  167468 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:15:23.087108  167468 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:15:23.087219  167468 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:15:23.087316  167468 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:15:23.087390  167468 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:15:23.087481  167468 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:15:23.087562  167468 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:15:23.087646  167468 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:15:23.087719  167468 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:15:23.087760  167468 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:15:23.087822  167468 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:15:23.221125  167468 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:15:23.460801  167468 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:15:23.654451  167468 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:15:24.356245  167468 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:15:24.473269  167468 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:15:24.473898  167468 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:15:24.476149  167468 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:15:24.477738  167468 out.go:252]   - Booting up control plane ...
	I1009 19:15:24.477865  167468 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:15:24.477931  167468 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:15:24.478446  167468 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:15:24.492764  167468 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:15:24.492874  167468 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:15:24.499467  167468 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:15:24.499575  167468 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:15:24.499618  167468 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:15:24.605084  167468 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:15:24.605222  167468 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:15:25.606067  167468 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001072895s
	I1009 19:15:25.610397  167468 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:15:25.610526  167468 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1009 19:15:25.610654  167468 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:15:25.610769  167468 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:19:25.611835  167468 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000416121s
	I1009 19:19:25.611992  167468 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000591031s
	I1009 19:19:25.612097  167468 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000888179s
	I1009 19:19:25.612103  167468 kubeadm.go:318] 
	I1009 19:19:25.612253  167468 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:19:25.612445  167468 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:19:25.612656  167468 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:19:25.612825  167468 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:19:25.612930  167468 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:19:25.613139  167468 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:19:25.613162  167468 kubeadm.go:318] 
	I1009 19:19:25.616947  167468 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:19:25.617060  167468 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:19:25.617572  167468 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 19:19:25.617651  167468 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
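kubeadm waited the full 4m0s for the three local health endpoints (kube-apiserver /livez on 192.168.49.2:8441, kube-controller-manager /healthz on 127.0.0.1:10257, kube-scheduler /livez on 127.0.0.1:10259) and none ever responded, which normally means the static-pod containers crashed or never started. The troubleshooting hint printed above can be followed directly on the node; a minimal sketch, assuming crictl and curl are present in the node image:

    # look for crashed or exited control-plane containers
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # inspect the logs of a failing container found above
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
    # probe the health endpoints kubeadm was polling (self-signed certs, hence -k)
    curl -k https://127.0.0.1:10257/healthz
    curl -k https://127.0.0.1:10259/livez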
	W1009 19:19:25.617804  167468 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001072895s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000416121s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000591031s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000888179s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 19:19:25.617887  167468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 19:19:26.066027  167468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:19:26.078995  167468 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:19:26.079043  167468 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:19:26.087404  167468 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:19:26.087421  167468 kubeadm.go:157] found existing configuration files:
	
	I1009 19:19:26.087474  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 19:19:26.095518  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:19:26.095582  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:19:26.103154  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 19:19:26.111105  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:19:26.111146  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:19:26.119058  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 19:19:26.127484  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:19:26.127537  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:19:26.135357  167468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 19:19:26.143254  167468 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:19:26.143297  167468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:19:26.151189  167468 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:19:26.210779  167468 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:19:26.274405  167468 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:23:28.750127  167468 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 19:23:28.750319  167468 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 19:23:28.753500  167468 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:23:28.753545  167468 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:23:28.753617  167468 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:23:28.753661  167468 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:23:28.753718  167468 kubeadm.go:318] OS: Linux
	I1009 19:23:28.753755  167468 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:23:28.753798  167468 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:23:28.753837  167468 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:23:28.753879  167468 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:23:28.753920  167468 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:23:28.753966  167468 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:23:28.754009  167468 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:23:28.754044  167468 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:23:28.754106  167468 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:23:28.754188  167468 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:23:28.754294  167468 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:23:28.754356  167468 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:23:28.761169  167468 out.go:252]   - Generating certificates and keys ...
	I1009 19:23:28.761262  167468 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:23:28.761315  167468 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:23:28.761440  167468 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:23:28.761501  167468 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:23:28.761579  167468 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:23:28.761622  167468 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:23:28.761682  167468 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:23:28.761749  167468 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:23:28.761806  167468 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:23:28.761871  167468 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:23:28.761900  167468 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:23:28.761950  167468 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:23:28.761989  167468 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:23:28.762031  167468 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:23:28.762071  167468 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:23:28.762123  167468 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:23:28.762165  167468 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:23:28.762242  167468 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:23:28.762313  167468 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:23:28.766946  167468 out.go:252]   - Booting up control plane ...
	I1009 19:23:28.767031  167468 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:23:28.767110  167468 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:23:28.767177  167468 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:23:28.767279  167468 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:23:28.767361  167468 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:23:28.767493  167468 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:23:28.767564  167468 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:23:28.767596  167468 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:23:28.767740  167468 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:23:28.767825  167468 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:23:28.767878  167468 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001082703s
	I1009 19:23:28.767963  167468 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:23:28.768033  167468 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1009 19:23:28.768102  167468 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:23:28.768166  167468 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:23:28.768228  167468 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000814983s
	I1009 19:23:28.768298  167468 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001138726s
	I1009 19:23:28.768353  167468 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001237075s
	I1009 19:23:28.768355  167468 kubeadm.go:318] 
	I1009 19:23:28.768454  167468 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:23:28.768516  167468 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:23:28.768593  167468 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:23:28.768716  167468 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:23:28.768790  167468 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:23:28.768868  167468 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:23:28.768903  167468 kubeadm.go:318] 
	I1009 19:23:28.768957  167468 kubeadm.go:402] duration metric: took 12m10.761538861s to StartCluster
	I1009 19:23:28.769014  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:23:28.769073  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:23:28.798618  167468 cri.go:89] found id: ""
	I1009 19:23:28.798638  167468 logs.go:282] 0 containers: []
	W1009 19:23:28.798647  167468 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:23:28.798655  167468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:23:28.798723  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:23:28.826917  167468 cri.go:89] found id: ""
	I1009 19:23:28.826933  167468 logs.go:282] 0 containers: []
	W1009 19:23:28.826940  167468 logs.go:284] No container was found matching "etcd"
	I1009 19:23:28.826945  167468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:23:28.827008  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:23:28.855079  167468 cri.go:89] found id: ""
	I1009 19:23:28.855097  167468 logs.go:282] 0 containers: []
	W1009 19:23:28.855103  167468 logs.go:284] No container was found matching "coredns"
	I1009 19:23:28.855108  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:23:28.855157  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:23:28.884473  167468 cri.go:89] found id: ""
	I1009 19:23:28.884493  167468 logs.go:282] 0 containers: []
	W1009 19:23:28.884503  167468 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:23:28.884509  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:23:28.884563  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:23:28.911619  167468 cri.go:89] found id: ""
	I1009 19:23:28.911637  167468 logs.go:282] 0 containers: []
	W1009 19:23:28.911646  167468 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:23:28.911653  167468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:23:28.911729  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:23:28.940299  167468 cri.go:89] found id: ""
	I1009 19:23:28.940316  167468 logs.go:282] 0 containers: []
	W1009 19:23:28.940325  167468 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:23:28.940332  167468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:23:28.940417  167468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:23:28.967431  167468 cri.go:89] found id: ""
	I1009 19:23:28.967448  167468 logs.go:282] 0 containers: []
	W1009 19:23:28.967455  167468 logs.go:284] No container was found matching "kindnet"
	I1009 19:23:28.967464  167468 logs.go:123] Gathering logs for kubelet ...
	I1009 19:23:28.967475  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:23:29.033707  167468 logs.go:123] Gathering logs for dmesg ...
	I1009 19:23:29.033734  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:23:29.046262  167468 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:23:29.046281  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:23:29.107779  167468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:23:29.100355   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:29.100974   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:29.102094   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:29.102502   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:29.104088   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:23:29.100355   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:29.100974   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:29.102094   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:29.102502   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:29.104088   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:23:29.107791  167468 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:23:29.107803  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:23:29.172081  167468 logs.go:123] Gathering logs for container status ...
	I1009 19:23:29.172106  167468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 19:23:29.202987  167468 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001082703s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000814983s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001138726s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001237075s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 19:23:29.203031  167468 out.go:285] * 
	W1009 19:23:29.203144  167468 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001082703s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000814983s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001138726s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001237075s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:23:29.203160  167468 out.go:285] * 
	W1009 19:23:29.204930  167468 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:23:29.208458  167468 out.go:203] 
	W1009 19:23:29.209891  167468 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001082703s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000814983s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001138726s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001237075s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:23:29.209916  167468 out.go:285] * 
	I1009 19:23:29.211473  167468 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:23:21 functional-158523 crio[5871]: time="2025-10-09T19:23:21.383890963Z" level=info msg="createCtr: removing container 52759f352f3bc676ab5b49a07a9d85f567d2e7279dd6e66b537befb9c34b9563" id=6bc74866-bd1a-4fe3-b2fe-ab4f48ef66c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:21 functional-158523 crio[5871]: time="2025-10-09T19:23:21.383925125Z" level=info msg="createCtr: deleting container 52759f352f3bc676ab5b49a07a9d85f567d2e7279dd6e66b537befb9c34b9563 from storage" id=6bc74866-bd1a-4fe3-b2fe-ab4f48ef66c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:21 functional-158523 crio[5871]: time="2025-10-09T19:23:21.38602149Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-158523_kube-system_8f4f9df5924bbaa4e1ec7f60e6576647_0" id=6bc74866-bd1a-4fe3-b2fe-ab4f48ef66c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.361369439Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=564b8cc3-706f-4ebc-85fc-e418d1c3752d name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.362346701Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=4d15c296-f8d7-4176-9190-700d112b9572 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.363786744Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-158523/kube-controller-manager" id=9e8f884d-15aa-4c75-8b1a-d77921fb52c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.364209596Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.368303863Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.368759728Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.385563675Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=9e8f884d-15aa-4c75-8b1a-d77921fb52c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.387017596Z" level=info msg="createCtr: deleting container ID 7903786c584f6892e4b56affb9c65eed6407c04a3870e7970134bf671afc0f1d from idIndex" id=9e8f884d-15aa-4c75-8b1a-d77921fb52c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.387061157Z" level=info msg="createCtr: removing container 7903786c584f6892e4b56affb9c65eed6407c04a3870e7970134bf671afc0f1d" id=9e8f884d-15aa-4c75-8b1a-d77921fb52c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.387095444Z" level=info msg="createCtr: deleting container 7903786c584f6892e4b56affb9c65eed6407c04a3870e7970134bf671afc0f1d from storage" id=9e8f884d-15aa-4c75-8b1a-d77921fb52c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:22 functional-158523 crio[5871]: time="2025-10-09T19:23:22.389220675Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-158523_kube-system_08a2248bbc397b9ed927890e2073120b_0" id=9e8f884d-15aa-4c75-8b1a-d77921fb52c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.361700431Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=3035940c-3eb2-4f17-9268-cf6479d33a9c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.3626609Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=cb82b414-d303-41e2-99d2-2720900c87b1 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.363666995Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-158523/kube-scheduler" id=f421fdd5-7a90-465c-a8ed-f9d66ced2939 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.363912721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.367677125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.368160014Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.385420562Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f421fdd5-7a90-465c-a8ed-f9d66ced2939 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.386768506Z" level=info msg="createCtr: deleting container ID 7ca39f0bfdca7c6677a8404742b48165f0f4969589d4ccb2467e982a6dd7797a from idIndex" id=f421fdd5-7a90-465c-a8ed-f9d66ced2939 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.386809948Z" level=info msg="createCtr: removing container 7ca39f0bfdca7c6677a8404742b48165f0f4969589d4ccb2467e982a6dd7797a" id=f421fdd5-7a90-465c-a8ed-f9d66ced2939 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.386848933Z" level=info msg="createCtr: deleting container 7ca39f0bfdca7c6677a8404742b48165f0f4969589d4ccb2467e982a6dd7797a from storage" id=f421fdd5-7a90-465c-a8ed-f9d66ced2939 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.38924825Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-158523_kube-system_589c70f36d169281ef056387fc3a74a2_0" id=f421fdd5-7a90-465c-a8ed-f9d66ced2939 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:23:32.395101   15912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:32.395686   15912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:32.397457   15912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:32.397965   15912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:32.399188   15912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:23:32 up  1:06,  0 user,  load average: 0.06, 0.08, 4.21
	Linux functional-158523 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:23:22 functional-158523 kubelet[14998]: E1009 19:23:22.360879   14998 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:23:22 functional-158523 kubelet[14998]: E1009 19:23:22.389641   14998 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:23:22 functional-158523 kubelet[14998]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:22 functional-158523 kubelet[14998]:  > podSandboxID="c46b8882958a3d5604399e1a44a408e9b7fbd2d13564b122e7c9bc822d9ccdf7"
	Oct 09 19:23:22 functional-158523 kubelet[14998]: E1009 19:23:22.389750   14998 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:23:22 functional-158523 kubelet[14998]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-158523_kube-system(08a2248bbc397b9ed927890e2073120b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:22 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:23:22 functional-158523 kubelet[14998]: E1009 19:23:22.389780   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-158523" podUID="08a2248bbc397b9ed927890e2073120b"
	Oct 09 19:23:24 functional-158523 kubelet[14998]: E1009 19:23:24.984891   14998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-158523?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 19:23:25 functional-158523 kubelet[14998]: I1009 19:23:25.146305   14998 kubelet_node_status.go:75] "Attempting to register node" node="functional-158523"
	Oct 09 19:23:25 functional-158523 kubelet[14998]: E1009 19:23:25.146731   14998 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-158523"
	Oct 09 19:23:26 functional-158523 kubelet[14998]: E1009 19:23:26.361185   14998 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:23:26 functional-158523 kubelet[14998]: E1009 19:23:26.389658   14998 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:23:26 functional-158523 kubelet[14998]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:26 functional-158523 kubelet[14998]:  > podSandboxID="ec5fd20197d3cb2af48faa87c42dae73063f326b50e117bd23262f4dc00885b3"
	Oct 09 19:23:26 functional-158523 kubelet[14998]: E1009 19:23:26.389797   14998 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:23:26 functional-158523 kubelet[14998]:         container kube-scheduler start failed in pod kube-scheduler-functional-158523_kube-system(589c70f36d169281ef056387fc3a74a2): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:26 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:23:26 functional-158523 kubelet[14998]: E1009 19:23:26.389838   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-158523" podUID="589c70f36d169281ef056387fc3a74a2"
	Oct 09 19:23:28 functional-158523 kubelet[14998]: E1009 19:23:28.373929   14998 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-158523\" not found"
	Oct 09 19:23:29 functional-158523 kubelet[14998]: E1009 19:23:29.511170   14998 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 09 19:23:31 functional-158523 kubelet[14998]: E1009 19:23:31.985723   14998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-158523?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 19:23:32 functional-158523 kubelet[14998]: I1009 19:23:32.148092   14998 kubelet_node_status.go:75] "Attempting to register node" node="functional-158523"
	Oct 09 19:23:32 functional-158523 kubelet[14998]: E1009 19:23:32.148532   14998 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-158523"
	Oct 09 19:23:32 functional-158523 kubelet[14998]: E1009 19:23:32.191088   14998 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-158523.186ce8d7e4fa8e80  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-158523,UID:functional-158523,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-158523 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-158523,},FirstTimestamp:2025-10-09 19:19:28.352259712 +0000 UTC m=+0.607993345,LastTimestamp:2025-10-09 19:19:28.352259712 +0000 UTC m=+0.607993345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-158523,}"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523: exit status 2 (311.759769ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-158523" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (2.00s)
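The kubelet entries above all fail with the same CreateContainerError ("cannot open sd-bus: No such file or directory"), which typically means the container runtime could not open a systemd bus connection inside the node container, so the etcd, apiserver, scheduler and controller-manager containers never start and the component-health check cannot pass. A rough manual look at the bus endpoints inside the node container (hypothetical diagnostic commands, not part of the test run, and the socket paths are an assumption) would be:

    docker exec functional-158523 systemctl is-system-running
    docker exec functional-158523 ls -l /run/systemd/private /run/dbus/system_bus_socket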

x
+
TestFunctional/serial/InvalidService (0.05s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-158523 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-158523 apply -f testdata/invalidsvc.yaml: exit status 1 (51.418376ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-158523 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.05s)
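The apply fails while kubectl tries to download the OpenAPI schema for client-side validation from the unreachable apiserver at 192.168.49.2:8441; the connection-refused error, not the manifest contents, is what stops the command here. As the error message itself suggests, validation could be skipped (a hypothetical retry; the apply would still need a reachable apiserver to succeed):

    kubectl --context functional-158523 apply -f testdata/invalidsvc.yaml --validate=false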

x
+
TestFunctional/parallel/DashboardCmd (1.89s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-158523 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-158523 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-158523 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-158523 --alsologtostderr -v=1] stderr:
I1009 19:23:36.320334  183294 out.go:360] Setting OutFile to fd 1 ...
I1009 19:23:36.320562  183294 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:23:36.320570  183294 out.go:374] Setting ErrFile to fd 2...
I1009 19:23:36.320576  183294 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:23:36.320922  183294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
I1009 19:23:36.321256  183294 mustload.go:65] Loading cluster: functional-158523
I1009 19:23:36.321842  183294 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:23:36.322414  183294 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
I1009 19:23:36.346645  183294 host.go:66] Checking if "functional-158523" exists ...
I1009 19:23:36.346978  183294 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1009 19:23:36.427669  183294 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:23:36.414291065 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1009 19:23:36.427825  183294 api_server.go:166] Checking apiserver status ...
I1009 19:23:36.427902  183294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1009 19:23:36.427965  183294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
I1009 19:23:36.451291  183294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
W1009 19:23:36.565025  183294 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1009 19:23:36.571167  183294 out.go:179] * The control-plane node functional-158523 apiserver is not running: (state=Stopped)
I1009 19:23:36.572758  183294 out.go:179]   To start a cluster, run: "minikube start -p functional-158523"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-158523
helpers_test.go:243: (dbg) docker inspect functional-158523:

-- stdout --
	[
	    {
	        "Id": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	        "Created": "2025-10-09T18:56:39.519997973Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 156103,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:56:39.555178961Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hosts",
	        "LogPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4-json.log",
	        "Name": "/functional-158523",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-158523:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-158523",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	                "LowerDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-158523",
	                "Source": "/var/lib/docker/volumes/functional-158523/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-158523",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-158523",
	                "name.minikube.sigs.k8s.io": "functional-158523",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "50f61b8c99f766763e9e30d37584e537ddf10f74215a382a4221cbe7f7c2e821",
	            "SandboxKey": "/var/run/docker/netns/50f61b8c99f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-158523": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:a1:34:24:78:d1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6347eb7b6b2842e266f51d3873c8aa169a4287ea52510c3d62dc4ed41012963c",
	                    "EndpointID": "eb1ada23cbc058d52037dd94574627dd1b0fedf455f291e656f050bf06881952",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-158523",
	                        "dea3735c567c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523: exit status 2 (346.751914ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 logs -n 25
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache     │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache     │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ kubectl   │ functional-158523 kubectl -- --context functional-158523 get pods                                                          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │                     │
	│ start     │ -p functional-158523 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │                     │
	│ config    │ functional-158523 config unset cpus                                                                                        │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ cp        │ functional-158523 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ service   │ functional-158523 service list                                                                                             │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ config    │ functional-158523 config get cpus                                                                                          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ config    │ functional-158523 config set cpus 2                                                                                        │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ config    │ functional-158523 config get cpus                                                                                          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ config    │ functional-158523 config unset cpus                                                                                        │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh       │ functional-158523 ssh -n functional-158523 sudo cat /home/docker/cp-test.txt                                               │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ config    │ functional-158523 config get cpus                                                                                          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ service   │ functional-158523 service list -o json                                                                                     │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ start     │ -p functional-158523 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                  │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ start     │ -p functional-158523 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                  │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ cp        │ functional-158523 cp functional-158523:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3343028733/001/cp-test.txt │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ service   │ functional-158523 service --namespace=default --https --url hello-node                                                     │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ start     │ -p functional-158523 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ ssh       │ functional-158523 ssh -n functional-158523 sudo cat /home/docker/cp-test.txt                                               │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ service   │ functional-158523 service hello-node --url --format={{.IP}}                                                                │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-158523 --alsologtostderr -v=1                                                             │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ service   │ functional-158523 service hello-node --url                                                                                 │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ cp        │ functional-158523 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh       │ functional-158523 ssh -n functional-158523 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	└───────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:23:36
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:23:36.037175  182999 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:23:36.037778  182999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:23:36.037796  182999 out.go:374] Setting ErrFile to fd 2...
	I1009 19:23:36.037812  182999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:23:36.038132  182999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:23:36.038788  182999 out.go:368] Setting JSON to false
	I1009 19:23:36.039721  182999 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3965,"bootTime":1760033851,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:23:36.039834  182999 start.go:143] virtualization: kvm guest
	I1009 19:23:36.041723  182999 out.go:179] * [functional-158523] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:23:36.043370  182999 notify.go:221] Checking for updates...
	I1009 19:23:36.043393  182999 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:23:36.044914  182999 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:23:36.046316  182999 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:23:36.050105  182999 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:23:36.051500  182999 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:23:36.052919  182999 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:23:36.054742  182999 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:23:36.055550  182999 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:23:36.081935  182999 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:23:36.082095  182999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:23:36.148703  182999 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:23:36.138476174 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:23:36.148869  182999 docker.go:319] overlay module found
	I1009 19:23:36.150853  182999 out.go:179] * Using the docker driver based on existing profile
	I1009 19:23:36.152205  182999 start.go:309] selected driver: docker
	I1009 19:23:36.152222  182999 start.go:930] validating driver "docker" against &{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:23:36.152322  182999 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:23:36.152439  182999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:23:36.242576  182999 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:23:36.229646073 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:23:36.243640  182999 cni.go:84] Creating CNI manager for ""
	I1009 19:23:36.243714  182999 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:23:36.243784  182999 start.go:353] cluster config:
	{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:23:36.246434  182999 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.391642132Z" level=info msg="createCtr: deleting container 48e7d06964617ecb6465098a4e02d6e62f6de72bec3b6d68067bb7185b5532ad from storage" id=e8fa8460-2b29-4b93-a688-665bd082facc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.393554439Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-158523_kube-system_8f4f9df5924bbaa4e1ec7f60e6576647_0" id=a361bc75-f795-41e7-b4ca-66571bdf9d8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.393955241Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-158523_kube-system_8b0d1e7a228bb11c7e5ac0baa08c68e2_0" id=e8fa8460-2b29-4b93-a688-665bd082facc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.362795549Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=458e9ac2-8e82-42ad-9ed8-0176b0506eba name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.363449782Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=8931ed4c-df27-484a-9356-5e1a89e73ba0 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.364207793Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=710da863-06db-486d-b4c9-80a482d2e979 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.364942224Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=1817f978-3968-47f0-815e-0370c8ea5da4 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.365297945Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-158523/kube-scheduler" id=75da15fb-02f3-48d8-84e1-896f1291da55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.365592193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.366507203Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-158523/kube-controller-manager" id=781431b2-8915-463f-8e5a-d7f2565f30a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.366773782Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.371011337Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.371617755Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.375896811Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.376619155Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.394891438Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=781431b2-8915-463f-8e5a-d7f2565f30a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.396523636Z" level=info msg="createCtr: deleting container ID 538bb1b109992ce2ce234957edae5565e9c483cb0f0c6d4208781320338d4672 from idIndex" id=781431b2-8915-463f-8e5a-d7f2565f30a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.396568828Z" level=info msg="createCtr: removing container 538bb1b109992ce2ce234957edae5565e9c483cb0f0c6d4208781320338d4672" id=781431b2-8915-463f-8e5a-d7f2565f30a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.396612269Z" level=info msg="createCtr: deleting container 538bb1b109992ce2ce234957edae5565e9c483cb0f0c6d4208781320338d4672 from storage" id=781431b2-8915-463f-8e5a-d7f2565f30a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.396771952Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=75da15fb-02f3-48d8-84e1-896f1291da55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.399840838Z" level=info msg="createCtr: deleting container ID dbfe9dcf924355fe67d799dd3987a44b22952cd4be50e254a75424a23ba37180 from idIndex" id=75da15fb-02f3-48d8-84e1-896f1291da55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.399919085Z" level=info msg="createCtr: removing container dbfe9dcf924355fe67d799dd3987a44b22952cd4be50e254a75424a23ba37180" id=75da15fb-02f3-48d8-84e1-896f1291da55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.399967256Z" level=info msg="createCtr: deleting container dbfe9dcf924355fe67d799dd3987a44b22952cd4be50e254a75424a23ba37180 from storage" id=75da15fb-02f3-48d8-84e1-896f1291da55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.40457004Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-158523_kube-system_08a2248bbc397b9ed927890e2073120b_0" id=781431b2-8915-463f-8e5a-d7f2565f30a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.404996713Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-158523_kube-system_589c70f36d169281ef056387fc3a74a2_0" id=75da15fb-02f3-48d8-84e1-896f1291da55 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:23:37.721702   16665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:37.722256   16665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:37.724007   16665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:37.724532   16665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:37.726099   16665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:23:37 up  1:06,  0 user,  load average: 0.68, 0.21, 4.21
	Linux functional-158523 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:23:34 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:23:34 functional-158523 kubelet[14998]: E1009 19:23:34.394069   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-158523" podUID="8f4f9df5924bbaa4e1ec7f60e6576647"
	Oct 09 19:23:34 functional-158523 kubelet[14998]: E1009 19:23:34.394212   14998 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:23:34 functional-158523 kubelet[14998]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:34 functional-158523 kubelet[14998]:  > podSandboxID="65203e222aa74740eff7a55e03a0b2e5e7c97409eb1aff251b14d64f4ad6aaa2"
	Oct 09 19:23:34 functional-158523 kubelet[14998]: E1009 19:23:34.394291   14998 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:23:34 functional-158523 kubelet[14998]:         container kube-apiserver start failed in pod kube-apiserver-functional-158523_kube-system(8b0d1e7a228bb11c7e5ac0baa08c68e2): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:34 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:23:34 functional-158523 kubelet[14998]: E1009 19:23:34.395491   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-158523" podUID="8b0d1e7a228bb11c7e5ac0baa08c68e2"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.362223   14998 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.362475   14998 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.404935   14998 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:23:37 functional-158523 kubelet[14998]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:37 functional-158523 kubelet[14998]:  > podSandboxID="c46b8882958a3d5604399e1a44a408e9b7fbd2d13564b122e7c9bc822d9ccdf7"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.405081   14998 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:23:37 functional-158523 kubelet[14998]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-158523_kube-system(08a2248bbc397b9ed927890e2073120b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:37 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.405124   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-158523" podUID="08a2248bbc397b9ed927890e2073120b"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.405285   14998 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:23:37 functional-158523 kubelet[14998]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:37 functional-158523 kubelet[14998]:  > podSandboxID="ec5fd20197d3cb2af48faa87c42dae73063f326b50e117bd23262f4dc00885b3"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.405393   14998 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:23:37 functional-158523 kubelet[14998]:         container kube-scheduler start failed in pod kube-scheduler-functional-158523_kube-system(589c70f36d169281ef056387fc3a74a2): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:37 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.406610   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-158523" podUID="589c70f36d169281ef056387fc3a74a2"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523: exit status 2 (342.666688ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-158523" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (1.89s)
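Every control-plane container in the post-mortem above fails with the same CRI-O error, "container create failed: cannot open sd-bus: No such file or directory", which is what the systemd cgroup path raises when the systemd bus cannot be reached inside the kicbase node container. The following is a minimal diagnostic sketch, not part of the test suite: the container/profile name functional-158523 comes from this report, while the probed socket paths, the dbus unit name, and the use of docker exec are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Probe the minikube node container for the sockets sd-bus tries to open;
	// their absence would match the CreateContainerError logged above.
	checks := [][]string{
		// systemd private bus and system D-Bus socket (assumed paths).
		{"docker", "exec", "functional-158523", "ls", "-l", "/run/systemd/private", "/run/dbus/system_bus_socket"},
		// Is dbus running inside the node container at all?
		{"docker", "exec", "functional-158523", "systemctl", "is-active", "dbus"},
		// Which cgroup manager CRI-O is configured with (systemd requires sd-bus).
		{"docker", "exec", "functional-158523", "grep", "-r", "cgroup_manager", "/etc/crio"},
	}
	for _, c := range checks {
		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
		fmt.Printf("$ %s\n%s(err=%v)\n\n", strings.Join(c, " "), out, err)
	}
}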

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (2.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158523 status: exit status 2 (363.87089ms)

                                                
                                                
-- stdout --
	functional-158523
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-amd64 -p functional-158523 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158523 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (384.730021ms)

                                                
                                                
-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

                                                
                                                
-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-amd64 -p functional-158523 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158523 status -o json: exit status 2 (418.023889ms)

                                                
                                                
-- stdout --
	{"Name":"functional-158523","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-amd64 -p functional-158523 status -o json" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-158523
helpers_test.go:243: (dbg) docker inspect functional-158523:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	        "Created": "2025-10-09T18:56:39.519997973Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 156103,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:56:39.555178961Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hosts",
	        "LogPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4-json.log",
	        "Name": "/functional-158523",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-158523:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-158523",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	                "LowerDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-158523",
	                "Source": "/var/lib/docker/volumes/functional-158523/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-158523",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-158523",
	                "name.minikube.sigs.k8s.io": "functional-158523",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "50f61b8c99f766763e9e30d37584e537ddf10f74215a382a4221cbe7f7c2e821",
	            "SandboxKey": "/var/run/docker/netns/50f61b8c99f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-158523": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:a1:34:24:78:d1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6347eb7b6b2842e266f51d3873c8aa169a4287ea52510c3d62dc4ed41012963c",
	                    "EndpointID": "eb1ada23cbc058d52037dd94574627dd1b0fedf455f291e656f050bf06881952",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-158523",
	                        "dea3735c567c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523: exit status 2 (385.990223ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 logs -n 25
helpers_test.go:260: TestFunctional/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-158523 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache     │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cache     │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ kubectl   │ functional-158523 kubectl -- --context functional-158523 get pods                                                          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │                     │
	│ start     │ -p functional-158523 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │                     │
	│ config    │ functional-158523 config unset cpus                                                                                        │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ cp        │ functional-158523 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ service   │ functional-158523 service list                                                                                             │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ config    │ functional-158523 config get cpus                                                                                          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ config    │ functional-158523 config set cpus 2                                                                                        │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ config    │ functional-158523 config get cpus                                                                                          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ config    │ functional-158523 config unset cpus                                                                                        │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh       │ functional-158523 ssh -n functional-158523 sudo cat /home/docker/cp-test.txt                                               │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ config    │ functional-158523 config get cpus                                                                                          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ service   │ functional-158523 service list -o json                                                                                     │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ start     │ -p functional-158523 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                  │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ start     │ -p functional-158523 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                  │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ cp        │ functional-158523 cp functional-158523:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3343028733/001/cp-test.txt │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ service   │ functional-158523 service --namespace=default --https --url hello-node                                                     │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ start     │ -p functional-158523 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ ssh       │ functional-158523 ssh -n functional-158523 sudo cat /home/docker/cp-test.txt                                               │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ service   │ functional-158523 service hello-node --url --format={{.IP}}                                                                │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-158523 --alsologtostderr -v=1                                                             │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ service   │ functional-158523 service hello-node --url                                                                                 │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ cp        │ functional-158523 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	└───────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:23:36
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:23:36.037175  182999 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:23:36.037778  182999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:23:36.037796  182999 out.go:374] Setting ErrFile to fd 2...
	I1009 19:23:36.037812  182999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:23:36.038132  182999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:23:36.038788  182999 out.go:368] Setting JSON to false
	I1009 19:23:36.039721  182999 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3965,"bootTime":1760033851,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:23:36.039834  182999 start.go:143] virtualization: kvm guest
	I1009 19:23:36.041723  182999 out.go:179] * [functional-158523] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:23:36.043370  182999 notify.go:221] Checking for updates...
	I1009 19:23:36.043393  182999 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:23:36.044914  182999 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:23:36.046316  182999 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:23:36.050105  182999 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:23:36.051500  182999 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:23:36.052919  182999 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:23:36.054742  182999 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:23:36.055550  182999 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:23:36.081935  182999 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:23:36.082095  182999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:23:36.148703  182999 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:23:36.138476174 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:23:36.148869  182999 docker.go:319] overlay module found
	I1009 19:23:36.150853  182999 out.go:179] * Using the docker driver based on existing profile
	I1009 19:23:36.152205  182999 start.go:309] selected driver: docker
	I1009 19:23:36.152222  182999 start.go:930] validating driver "docker" against &{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:23:36.152322  182999 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:23:36.152439  182999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:23:36.242576  182999 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:23:36.229646073 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:23:36.243640  182999 cni.go:84] Creating CNI manager for ""
	I1009 19:23:36.243714  182999 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:23:36.243784  182999 start.go:353] cluster config:
	{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:23:36.246434  182999 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.386809948Z" level=info msg="createCtr: removing container 7ca39f0bfdca7c6677a8404742b48165f0f4969589d4ccb2467e982a6dd7797a" id=f421fdd5-7a90-465c-a8ed-f9d66ced2939 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.386848933Z" level=info msg="createCtr: deleting container 7ca39f0bfdca7c6677a8404742b48165f0f4969589d4ccb2467e982a6dd7797a from storage" id=f421fdd5-7a90-465c-a8ed-f9d66ced2939 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:26 functional-158523 crio[5871]: time="2025-10-09T19:23:26.38924825Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-158523_kube-system_589c70f36d169281ef056387fc3a74a2_0" id=f421fdd5-7a90-465c-a8ed-f9d66ced2939 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.361011441Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=e5a50a4b-7bf7-4b2d-a68b-1a96d8693b6f name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.361267089Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=c7204e19-93e4-4c92-aaa1-6a2a3d4d8f7a name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.362163961Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=bbbe2d58-d740-4428-bad4-0a0952ad7ac5 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.362212762Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=336eb4c6-1b11-4a53-950d-b6674319871e name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.363180205Z" level=info msg="Creating container: kube-system/etcd-functional-158523/etcd" id=a361bc75-f795-41e7-b4ca-66571bdf9d8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.36345608Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.363466042Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-158523/kube-apiserver" id=e8fa8460-2b29-4b93-a688-665bd082facc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.363768632Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.367712782Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.368150127Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.370509113Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.371104503Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.387697025Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a361bc75-f795-41e7-b4ca-66571bdf9d8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.389052704Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=e8fa8460-2b29-4b93-a688-665bd082facc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.38919482Z" level=info msg="createCtr: deleting container ID 58355a528b323b53cc67c21e6b21e804ba56ea16056754d8d715f6703072b17e from idIndex" id=a361bc75-f795-41e7-b4ca-66571bdf9d8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.38923467Z" level=info msg="createCtr: removing container 58355a528b323b53cc67c21e6b21e804ba56ea16056754d8d715f6703072b17e" id=a361bc75-f795-41e7-b4ca-66571bdf9d8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.389268813Z" level=info msg="createCtr: deleting container 58355a528b323b53cc67c21e6b21e804ba56ea16056754d8d715f6703072b17e from storage" id=a361bc75-f795-41e7-b4ca-66571bdf9d8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.391548959Z" level=info msg="createCtr: deleting container ID 48e7d06964617ecb6465098a4e02d6e62f6de72bec3b6d68067bb7185b5532ad from idIndex" id=e8fa8460-2b29-4b93-a688-665bd082facc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.391599317Z" level=info msg="createCtr: removing container 48e7d06964617ecb6465098a4e02d6e62f6de72bec3b6d68067bb7185b5532ad" id=e8fa8460-2b29-4b93-a688-665bd082facc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.391642132Z" level=info msg="createCtr: deleting container 48e7d06964617ecb6465098a4e02d6e62f6de72bec3b6d68067bb7185b5532ad from storage" id=e8fa8460-2b29-4b93-a688-665bd082facc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.393554439Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-158523_kube-system_8f4f9df5924bbaa4e1ec7f60e6576647_0" id=a361bc75-f795-41e7-b4ca-66571bdf9d8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.393955241Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-158523_kube-system_8b0d1e7a228bb11c7e5ac0baa08c68e2_0" id=e8fa8460-2b29-4b93-a688-665bd082facc name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:23:37.385388   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:37.387459   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:37.389438   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:37.390013   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:37.391769   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:23:37 up  1:06,  0 user,  load average: 0.21, 0.11, 4.20
	Linux functional-158523 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:23:34 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:23:34 functional-158523 kubelet[14998]: E1009 19:23:34.394069   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-158523" podUID="8f4f9df5924bbaa4e1ec7f60e6576647"
	Oct 09 19:23:34 functional-158523 kubelet[14998]: E1009 19:23:34.394212   14998 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:23:34 functional-158523 kubelet[14998]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:34 functional-158523 kubelet[14998]:  > podSandboxID="65203e222aa74740eff7a55e03a0b2e5e7c97409eb1aff251b14d64f4ad6aaa2"
	Oct 09 19:23:34 functional-158523 kubelet[14998]: E1009 19:23:34.394291   14998 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:23:34 functional-158523 kubelet[14998]:         container kube-apiserver start failed in pod kube-apiserver-functional-158523_kube-system(8b0d1e7a228bb11c7e5ac0baa08c68e2): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:34 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:23:34 functional-158523 kubelet[14998]: E1009 19:23:34.395491   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-158523" podUID="8b0d1e7a228bb11c7e5ac0baa08c68e2"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.362223   14998 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.362475   14998 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.404935   14998 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:23:37 functional-158523 kubelet[14998]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:37 functional-158523 kubelet[14998]:  > podSandboxID="c46b8882958a3d5604399e1a44a408e9b7fbd2d13564b122e7c9bc822d9ccdf7"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.405081   14998 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:23:37 functional-158523 kubelet[14998]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-158523_kube-system(08a2248bbc397b9ed927890e2073120b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:37 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.405124   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-158523" podUID="08a2248bbc397b9ed927890e2073120b"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.405285   14998 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:23:37 functional-158523 kubelet[14998]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:37 functional-158523 kubelet[14998]:  > podSandboxID="ec5fd20197d3cb2af48faa87c42dae73063f326b50e117bd23262f4dc00885b3"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.405393   14998 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:23:37 functional-158523 kubelet[14998]:         container kube-scheduler start failed in pod kube-scheduler-functional-158523_kube-system(589c70f36d169281ef056387fc3a74a2): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:37 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.406610   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-158523" podUID="589c70f36d169281ef056387fc3a74a2"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523: exit status 2 (353.481182ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-158523" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (2.72s)
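The three status invocations above differ only in output format; the JSON form is the easiest to consume programmatically. Below is a minimal sketch, assuming the binary path and profile name shown in this report and modeling only the fields visible in the JSON line above; since minikube exits with status 2 when a component is Stopped (as it does here), stdout is parsed even when the command returns an error.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Mirrors only the fields visible in the `status -o json` output above.
type minikubeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// `status` exits non-zero when a component is Stopped, so keep whatever
	// stdout was produced even if err is non-nil.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-158523",
		"status", "-o", "json").Output()
	var st minikubeStatus
	if jerr := json.Unmarshal(out, &st); jerr != nil {
		fmt.Println("could not parse status output:", jerr, "; exec error:", err)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s (exit err=%v)\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig, err)
}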

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-158523 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-158523 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (68.751728ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-158523 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-158523 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-158523 describe po hello-node-connect: exit status 1 (62.829937ms)

                                                
                                                
** stderr ** 
	E1009 19:23:38.676784  185611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:38.677230  185611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:38.678680  185611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:38.678960  185611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:38.680452  185611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-158523 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-158523 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-158523 logs -l app=hello-node-connect: exit status 1 (56.28924ms)

                                                
                                                
** stderr ** 
	E1009 19:23:38.733519  185652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:38.733982  185652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:38.735425  185652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:38.735761  185652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-158523 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-158523 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-158523 describe svc hello-node-connect: exit status 1 (60.872684ms)

                                                
                                                
** stderr ** 
	E1009 19:23:38.794803  185686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:38.795142  185686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:38.796513  185686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:38.796780  185686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:38.798278  185686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1626: "kubectl --context functional-158523 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
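Every kubectl call in this test fails the same way: the connection to 192.168.49.2:8441 is refused, so the hello-node-connect objects were never created and the errors above reflect an unreachable API server rather than a problem with the deployment itself. A minimal reproduction sketch, assuming the functional-158523 profile from this run is still up (the endpoint and port are taken from the errors above; curl is only illustrative here):

	# Probe the apiserver port directly from the host; "connection refused" here matches the errors above
	curl -sk --max-time 5 https://192.168.49.2:8441/healthz
	# Ask minikube for its view of the apiserver (the post-mortem below runs the same status check)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523
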
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-158523
helpers_test.go:243: (dbg) docker inspect functional-158523:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	        "Created": "2025-10-09T18:56:39.519997973Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 156103,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:56:39.555178961Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hosts",
	        "LogPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4-json.log",
	        "Name": "/functional-158523",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-158523:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-158523",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	                "LowerDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-158523",
	                "Source": "/var/lib/docker/volumes/functional-158523/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-158523",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-158523",
	                "name.minikube.sigs.k8s.io": "functional-158523",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "50f61b8c99f766763e9e30d37584e537ddf10f74215a382a4221cbe7f7c2e821",
	            "SandboxKey": "/var/run/docker/netns/50f61b8c99f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-158523": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:a1:34:24:78:d1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6347eb7b6b2842e266f51d3873c8aa169a4287ea52510c3d62dc4ed41012963c",
	                    "EndpointID": "eb1ada23cbc058d52037dd94574627dd1b0fedf455f291e656f050bf06881952",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-158523",
	                        "dea3735c567c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523: exit status 2 (315.221979ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 logs -n 25
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ config    │ functional-158523 config get cpus                                                                                          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ config    │ functional-158523 config unset cpus                                                                                        │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh       │ functional-158523 ssh -n functional-158523 sudo cat /home/docker/cp-test.txt                                               │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ config    │ functional-158523 config get cpus                                                                                          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ service   │ functional-158523 service list -o json                                                                                     │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ start     │ -p functional-158523 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                  │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ start     │ -p functional-158523 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                  │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ cp        │ functional-158523 cp functional-158523:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3343028733/001/cp-test.txt │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ service   │ functional-158523 service --namespace=default --https --url hello-node                                                     │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ start     │ -p functional-158523 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ ssh       │ functional-158523 ssh -n functional-158523 sudo cat /home/docker/cp-test.txt                                               │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ service   │ functional-158523 service hello-node --url --format={{.IP}}                                                                │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-158523 --alsologtostderr -v=1                                                             │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ service   │ functional-158523 service hello-node --url                                                                                 │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ cp        │ functional-158523 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh       │ functional-158523 ssh -n functional-158523 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh       │ functional-158523 ssh echo hello                                                                                           │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh       │ functional-158523 ssh cat /etc/hostname                                                                                    │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ tunnel    │ functional-158523 tunnel --alsologtostderr                                                                                 │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ tunnel    │ functional-158523 tunnel --alsologtostderr                                                                                 │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ addons    │ functional-158523 addons list                                                                                              │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ tunnel    │ functional-158523 tunnel --alsologtostderr                                                                                 │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ addons    │ functional-158523 addons list -o json                                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh       │ functional-158523 ssh sudo cat /etc/ssl/certs/141519.pem                                                                   │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh       │ functional-158523 ssh sudo cat /usr/share/ca-certificates/141519.pem                                                       │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	└───────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:23:36
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:23:36.037175  182999 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:23:36.037778  182999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:23:36.037796  182999 out.go:374] Setting ErrFile to fd 2...
	I1009 19:23:36.037812  182999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:23:36.038132  182999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:23:36.038788  182999 out.go:368] Setting JSON to false
	I1009 19:23:36.039721  182999 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3965,"bootTime":1760033851,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:23:36.039834  182999 start.go:143] virtualization: kvm guest
	I1009 19:23:36.041723  182999 out.go:179] * [functional-158523] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:23:36.043370  182999 notify.go:221] Checking for updates...
	I1009 19:23:36.043393  182999 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:23:36.044914  182999 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:23:36.046316  182999 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:23:36.050105  182999 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:23:36.051500  182999 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:23:36.052919  182999 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:23:36.054742  182999 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:23:36.055550  182999 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:23:36.081935  182999 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:23:36.082095  182999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:23:36.148703  182999 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:23:36.138476174 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:23:36.148869  182999 docker.go:319] overlay module found
	I1009 19:23:36.150853  182999 out.go:179] * Using the docker driver based on existing profile
	I1009 19:23:36.152205  182999 start.go:309] selected driver: docker
	I1009 19:23:36.152222  182999 start.go:930] validating driver "docker" against &{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:23:36.152322  182999 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:23:36.152439  182999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:23:36.242576  182999 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:23:36.229646073 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:23:36.243640  182999 cni.go:84] Creating CNI manager for ""
	I1009 19:23:36.243714  182999 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:23:36.243784  182999 start.go:353] cluster config:
	{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:23:36.246434  182999 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.391642132Z" level=info msg="createCtr: deleting container 48e7d06964617ecb6465098a4e02d6e62f6de72bec3b6d68067bb7185b5532ad from storage" id=e8fa8460-2b29-4b93-a688-665bd082facc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.393554439Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-158523_kube-system_8f4f9df5924bbaa4e1ec7f60e6576647_0" id=a361bc75-f795-41e7-b4ca-66571bdf9d8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.393955241Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-158523_kube-system_8b0d1e7a228bb11c7e5ac0baa08c68e2_0" id=e8fa8460-2b29-4b93-a688-665bd082facc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.362795549Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=458e9ac2-8e82-42ad-9ed8-0176b0506eba name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.363449782Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=8931ed4c-df27-484a-9356-5e1a89e73ba0 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.364207793Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=710da863-06db-486d-b4c9-80a482d2e979 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.364942224Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=1817f978-3968-47f0-815e-0370c8ea5da4 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.365297945Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-158523/kube-scheduler" id=75da15fb-02f3-48d8-84e1-896f1291da55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.365592193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.366507203Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-158523/kube-controller-manager" id=781431b2-8915-463f-8e5a-d7f2565f30a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.366773782Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.371011337Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.371617755Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.375896811Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.376619155Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.394891438Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=781431b2-8915-463f-8e5a-d7f2565f30a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.396523636Z" level=info msg="createCtr: deleting container ID 538bb1b109992ce2ce234957edae5565e9c483cb0f0c6d4208781320338d4672 from idIndex" id=781431b2-8915-463f-8e5a-d7f2565f30a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.396568828Z" level=info msg="createCtr: removing container 538bb1b109992ce2ce234957edae5565e9c483cb0f0c6d4208781320338d4672" id=781431b2-8915-463f-8e5a-d7f2565f30a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.396612269Z" level=info msg="createCtr: deleting container 538bb1b109992ce2ce234957edae5565e9c483cb0f0c6d4208781320338d4672 from storage" id=781431b2-8915-463f-8e5a-d7f2565f30a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.396771952Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=75da15fb-02f3-48d8-84e1-896f1291da55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.399840838Z" level=info msg="createCtr: deleting container ID dbfe9dcf924355fe67d799dd3987a44b22952cd4be50e254a75424a23ba37180 from idIndex" id=75da15fb-02f3-48d8-84e1-896f1291da55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.399919085Z" level=info msg="createCtr: removing container dbfe9dcf924355fe67d799dd3987a44b22952cd4be50e254a75424a23ba37180" id=75da15fb-02f3-48d8-84e1-896f1291da55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.399967256Z" level=info msg="createCtr: deleting container dbfe9dcf924355fe67d799dd3987a44b22952cd4be50e254a75424a23ba37180 from storage" id=75da15fb-02f3-48d8-84e1-896f1291da55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.40457004Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-158523_kube-system_08a2248bbc397b9ed927890e2073120b_0" id=781431b2-8915-463f-8e5a-d7f2565f30a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.404996713Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-158523_kube-system_589c70f36d169281ef056387fc3a74a2_0" id=75da15fb-02f3-48d8-84e1-896f1291da55 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:23:39.740886   17039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:39.741467   17039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:39.743093   17039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:39.743625   17039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:39.745236   17039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:23:39 up  1:06,  0 user,  load average: 0.68, 0.21, 4.21
	Linux functional-158523 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:23:34 functional-158523 kubelet[14998]: E1009 19:23:34.394291   14998 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:23:34 functional-158523 kubelet[14998]:         container kube-apiserver start failed in pod kube-apiserver-functional-158523_kube-system(8b0d1e7a228bb11c7e5ac0baa08c68e2): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:34 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:23:34 functional-158523 kubelet[14998]: E1009 19:23:34.395491   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-158523" podUID="8b0d1e7a228bb11c7e5ac0baa08c68e2"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.362223   14998 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.362475   14998 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.404935   14998 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:23:37 functional-158523 kubelet[14998]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:37 functional-158523 kubelet[14998]:  > podSandboxID="c46b8882958a3d5604399e1a44a408e9b7fbd2d13564b122e7c9bc822d9ccdf7"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.405081   14998 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:23:37 functional-158523 kubelet[14998]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-158523_kube-system(08a2248bbc397b9ed927890e2073120b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:37 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.405124   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-158523" podUID="08a2248bbc397b9ed927890e2073120b"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.405285   14998 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:23:37 functional-158523 kubelet[14998]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:37 functional-158523 kubelet[14998]:  > podSandboxID="ec5fd20197d3cb2af48faa87c42dae73063f326b50e117bd23262f4dc00885b3"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.405393   14998 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:23:37 functional-158523 kubelet[14998]:         container kube-scheduler start failed in pod kube-scheduler-functional-158523_kube-system(589c70f36d169281ef056387fc3a74a2): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:37 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.406610   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-158523" podUID="589c70f36d169281ef056387fc3a74a2"
	Oct 09 19:23:38 functional-158523 kubelet[14998]: E1009 19:23:38.374163   14998 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-158523\" not found"
	Oct 09 19:23:38 functional-158523 kubelet[14998]: E1009 19:23:38.986984   14998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-158523?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 19:23:39 functional-158523 kubelet[14998]: I1009 19:23:39.150566   14998 kubelet_node_status.go:75] "Attempting to register node" node="functional-158523"
	Oct 09 19:23:39 functional-158523 kubelet[14998]: E1009 19:23:39.150948   14998 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-158523"
	Oct 09 19:23:39 functional-158523 kubelet[14998]: E1009 19:23:39.478604   14998 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523: exit status 2 (322.956977ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-158523" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (1.58s)
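
The kubelet and CRI-O excerpts above show why port 8441 never answers: every control-plane container (kube-apiserver, kube-controller-manager, kube-scheduler, etcd) fails with "container create failed: cannot open sd-bus: No such file or directory", which points at the runtime's systemd/D-Bus integration inside the node rather than at this particular test. A rough way to confirm that from the host, sketched with the profile name from this run (illustrative commands only, not part of the test suite):

	# Is D-Bus reachable inside the minikube node? The sd-bus error suggests it is not
	out/minikube-linux-amd64 -p functional-158523 ssh -- systemctl is-active dbus
	out/minikube-linux-amd64 -p functional-158523 ssh -- ls -l /run/dbus/system_bus_socket
	# Recent CRI-O journal entries for the same create failures
	out/minikube-linux-amd64 -p functional-158523 ssh -- sudo journalctl -u crio -n 20 --no-pager
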

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (241.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1009 19:24:00.641473  141519 retry.go:31] will retry after 17.545763883s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1009 19:24:18.187784  141519 retry.go:31] will retry after 15.653335495s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1009 19:24:33.842161  141519 retry.go:31] will retry after 43.39745667s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the warning above is repeated 64 more times, one per poll attempt, while the helper waits out the rest of its 4m0s budget against the unreachable API server]
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test_pvc_test.go:50: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523: exit status 2 (313.874445ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-158523" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
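The helper above is simply polling kube-system for a pod carrying the integration-test=storage-provisioner label until the API server answers or the 4m0s budget runs out. A minimal manual equivalent of that check, assuming kubectl is pointed at the functional-158523 kubeconfig (namespace and label selector are taken from the warnings above; the loop itself is only illustrative):

    # Poll for the pod the test is waiting on; every call fails here with
    # "connection refused" because nothing is listening on 192.168.49.2:8441.
    while true; do
      kubectl get pods -n kube-system -l integration-test=storage-provisioner -o wide || true
      sleep 5
    done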
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-158523
helpers_test.go:243: (dbg) docker inspect functional-158523:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	        "Created": "2025-10-09T18:56:39.519997973Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 156103,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:56:39.555178961Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hosts",
	        "LogPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4-json.log",
	        "Name": "/functional-158523",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-158523:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-158523",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	                "LowerDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-158523",
	                "Source": "/var/lib/docker/volumes/functional-158523/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-158523",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-158523",
	                "name.minikube.sigs.k8s.io": "functional-158523",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "50f61b8c99f766763e9e30d37584e537ddf10f74215a382a4221cbe7f7c2e821",
	            "SandboxKey": "/var/run/docker/netns/50f61b8c99f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-158523": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:a1:34:24:78:d1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6347eb7b6b2842e266f51d3873c8aa169a4287ea52510c3d62dc4ed41012963c",
	                    "EndpointID": "eb1ada23cbc058d52037dd94574627dd1b0fedf455f291e656f050bf06881952",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-158523",
	                        "dea3735c567c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
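For triage, the inspect dump above can be reduced to the few fields that matter here using docker's Go-template formatter and the port-mapping helper; a minimal sketch (the container name is taken from the dump above, and the choice of fields is illustrative):

    # Container state and its IP on the minikube network
    docker inspect -f '{{.State.Status}}' functional-158523
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' functional-158523
    # Host port forwarded to the API server port inside the container (8441 -> 32781 above)
    docker port functional-158523 8441

The dump shows the container itself Running with IP 192.168.49.2 and port 8441 published, which is consistent with the "Stopped" API server status reported earlier: the refused connections point at the API server process inside the node rather than at the Docker networking layer.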
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523: exit status 2 (302.844085ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
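The two status probes above query one field each with --format; the same Go-template flag can report both in a single call, which makes the mismatch (host Running, API server Stopped) easier to spot in a log. A small sketch using only fields already shown in this report:

    # Combined host/apiserver view for the functional-158523 profile
    out/minikube-linux-amd64 status -p functional-158523 --format='host:{{.Host}} apiserver:{{.APIServer}}'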
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 logs -n 25
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-158523 image ls                                                                                                │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image          │ functional-158523 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image          │ functional-158523 image save --daemon kicbase/echo-server:functional-158523 --alsologtostderr                             │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh            │ functional-158523 ssh findmnt -T /mount-9p | grep 9p                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh            │ functional-158523 ssh -- ls -la /mount-9p                                                                                 │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh            │ functional-158523 ssh sudo umount -f /mount-9p                                                                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ mount          │ -p functional-158523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup157564093/001:/mount1 --alsologtostderr -v=1         │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ ssh            │ functional-158523 ssh findmnt -T /mount1                                                                                  │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ mount          │ -p functional-158523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup157564093/001:/mount2 --alsologtostderr -v=1         │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ mount          │ -p functional-158523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup157564093/001:/mount3 --alsologtostderr -v=1         │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ license        │                                                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ update-context │ functional-158523 update-context --alsologtostderr -v=2                                                                   │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh            │ functional-158523 ssh findmnt -T /mount1                                                                                  │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ update-context │ functional-158523 update-context --alsologtostderr -v=2                                                                   │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ update-context │ functional-158523 update-context --alsologtostderr -v=2                                                                   │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh            │ functional-158523 ssh findmnt -T /mount2                                                                                  │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh            │ functional-158523 ssh findmnt -T /mount3                                                                                  │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ mount          │ -p functional-158523 --kill=true                                                                                          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ image          │ functional-158523 image ls --format short --alsologtostderr                                                               │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image          │ functional-158523 image ls --format yaml --alsologtostderr                                                                │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh            │ functional-158523 ssh pgrep buildkitd                                                                                     │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ image          │ functional-158523 image ls --format json --alsologtostderr                                                                │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image          │ functional-158523 image build -t localhost/my-image:functional-158523 testdata/build --alsologtostderr                    │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image          │ functional-158523 image ls --format table --alsologtostderr                                                               │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image          │ functional-158523 image ls                                                                                                │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:23:36
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:23:36.037175  182999 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:23:36.037778  182999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:23:36.037796  182999 out.go:374] Setting ErrFile to fd 2...
	I1009 19:23:36.037812  182999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:23:36.038132  182999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:23:36.038788  182999 out.go:368] Setting JSON to false
	I1009 19:23:36.039721  182999 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3965,"bootTime":1760033851,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:23:36.039834  182999 start.go:143] virtualization: kvm guest
	I1009 19:23:36.041723  182999 out.go:179] * [functional-158523] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:23:36.043370  182999 notify.go:221] Checking for updates...
	I1009 19:23:36.043393  182999 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:23:36.044914  182999 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:23:36.046316  182999 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:23:36.050105  182999 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:23:36.051500  182999 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:23:36.052919  182999 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:23:36.054742  182999 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:23:36.055550  182999 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:23:36.081935  182999 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:23:36.082095  182999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:23:36.148703  182999 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:23:36.138476174 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:23:36.148869  182999 docker.go:319] overlay module found
	I1009 19:23:36.150853  182999 out.go:179] * Using the docker driver based on existing profile
	I1009 19:23:36.152205  182999 start.go:309] selected driver: docker
	I1009 19:23:36.152222  182999 start.go:930] validating driver "docker" against &{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:23:36.152322  182999 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:23:36.152439  182999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:23:36.242576  182999 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:23:36.229646073 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:23:36.243640  182999 cni.go:84] Creating CNI manager for ""
	I1009 19:23:36.243714  182999 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:23:36.243784  182999 start.go:353] cluster config:
	{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:23:36.246434  182999 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 09 19:27:30 functional-158523 crio[5871]: time="2025-10-09T19:27:30.384095026Z" level=info msg="createCtr: removing container 872f589e20291c8efba66cfa426529b89f45da4aa676beaf3669c1c280818bbb" id=bd9e5021-67c5-4154-9851-01f69498e953 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:30 functional-158523 crio[5871]: time="2025-10-09T19:27:30.384136135Z" level=info msg="createCtr: deleting container 872f589e20291c8efba66cfa426529b89f45da4aa676beaf3669c1c280818bbb from storage" id=bd9e5021-67c5-4154-9851-01f69498e953 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:30 functional-158523 crio[5871]: time="2025-10-09T19:27:30.38646292Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-158523_kube-system_08a2248bbc397b9ed927890e2073120b_0" id=bd9e5021-67c5-4154-9851-01f69498e953 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:32 functional-158523 crio[5871]: time="2025-10-09T19:27:32.36097626Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=bc614b6a-53f7-4ef7-b93e-77d4bae3a1e4 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:27:32 functional-158523 crio[5871]: time="2025-10-09T19:27:32.361982587Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=c51af17a-3abb-493d-ac8e-aeb526fc40fc name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:27:32 functional-158523 crio[5871]: time="2025-10-09T19:27:32.362979494Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-158523/kube-scheduler" id=43b885cc-aa20-41c7-bd15-d376a3056cc0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:32 functional-158523 crio[5871]: time="2025-10-09T19:27:32.363270424Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:27:32 functional-158523 crio[5871]: time="2025-10-09T19:27:32.367717312Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:27:32 functional-158523 crio[5871]: time="2025-10-09T19:27:32.368314956Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:27:32 functional-158523 crio[5871]: time="2025-10-09T19:27:32.381284323Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=43b885cc-aa20-41c7-bd15-d376a3056cc0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:32 functional-158523 crio[5871]: time="2025-10-09T19:27:32.382718142Z" level=info msg="createCtr: deleting container ID f9d515b6e25e17c66bca29d9dc1a58864e223422006fbdaaac25b7537f7d6391 from idIndex" id=43b885cc-aa20-41c7-bd15-d376a3056cc0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:32 functional-158523 crio[5871]: time="2025-10-09T19:27:32.382757727Z" level=info msg="createCtr: removing container f9d515b6e25e17c66bca29d9dc1a58864e223422006fbdaaac25b7537f7d6391" id=43b885cc-aa20-41c7-bd15-d376a3056cc0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:32 functional-158523 crio[5871]: time="2025-10-09T19:27:32.382790839Z" level=info msg="createCtr: deleting container f9d515b6e25e17c66bca29d9dc1a58864e223422006fbdaaac25b7537f7d6391 from storage" id=43b885cc-aa20-41c7-bd15-d376a3056cc0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:32 functional-158523 crio[5871]: time="2025-10-09T19:27:32.384921765Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-158523_kube-system_589c70f36d169281ef056387fc3a74a2_0" id=43b885cc-aa20-41c7-bd15-d376a3056cc0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:36 functional-158523 crio[5871]: time="2025-10-09T19:27:36.361739194Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=77eb3975-1cb7-4bce-b014-31f67f5a78fe name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:27:36 functional-158523 crio[5871]: time="2025-10-09T19:27:36.362722101Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=4c9319fd-78e5-4eef-b33e-249e53dc16c1 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:27:36 functional-158523 crio[5871]: time="2025-10-09T19:27:36.363699318Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-158523/kube-apiserver" id=8b34d9b8-3015-42d9-81ad-bd7d242334e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:36 functional-158523 crio[5871]: time="2025-10-09T19:27:36.363945782Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:27:36 functional-158523 crio[5871]: time="2025-10-09T19:27:36.367400365Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:27:36 functional-158523 crio[5871]: time="2025-10-09T19:27:36.367834708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:27:36 functional-158523 crio[5871]: time="2025-10-09T19:27:36.385843709Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8b34d9b8-3015-42d9-81ad-bd7d242334e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:36 functional-158523 crio[5871]: time="2025-10-09T19:27:36.387260719Z" level=info msg="createCtr: deleting container ID 86704c8cd2985b1561ce8260802328887c9a9edcaa19ea0726f96b76a11daf04 from idIndex" id=8b34d9b8-3015-42d9-81ad-bd7d242334e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:36 functional-158523 crio[5871]: time="2025-10-09T19:27:36.387301098Z" level=info msg="createCtr: removing container 86704c8cd2985b1561ce8260802328887c9a9edcaa19ea0726f96b76a11daf04" id=8b34d9b8-3015-42d9-81ad-bd7d242334e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:36 functional-158523 crio[5871]: time="2025-10-09T19:27:36.387334117Z" level=info msg="createCtr: deleting container 86704c8cd2985b1561ce8260802328887c9a9edcaa19ea0726f96b76a11daf04 from storage" id=8b34d9b8-3015-42d9-81ad-bd7d242334e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:36 functional-158523 crio[5871]: time="2025-10-09T19:27:36.389449682Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-158523_kube-system_8b0d1e7a228bb11c7e5ac0baa08c68e2_0" id=8b34d9b8-3015-42d9-81ad-bd7d242334e3 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:27:38.386207   19220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:27:38.386845   19220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:27:38.388250   19220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:27:38.388738   19220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:27:38.390394   19220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:27:38 up  1:10,  0 user,  load average: 0.17, 0.13, 3.26
	Linux functional-158523 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:27:30 functional-158523 kubelet[14998]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-158523_kube-system(08a2248bbc397b9ed927890e2073120b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:27:30 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:27:30 functional-158523 kubelet[14998]: E1009 19:27:30.386975   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-158523" podUID="08a2248bbc397b9ed927890e2073120b"
	Oct 09 19:27:32 functional-158523 kubelet[14998]: E1009 19:27:32.360474   14998 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:27:32 functional-158523 kubelet[14998]: E1009 19:27:32.385223   14998 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:27:32 functional-158523 kubelet[14998]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:27:32 functional-158523 kubelet[14998]:  > podSandboxID="ec5fd20197d3cb2af48faa87c42dae73063f326b50e117bd23262f4dc00885b3"
	Oct 09 19:27:32 functional-158523 kubelet[14998]: E1009 19:27:32.385326   14998 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:27:32 functional-158523 kubelet[14998]:         container kube-scheduler start failed in pod kube-scheduler-functional-158523_kube-system(589c70f36d169281ef056387fc3a74a2): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:27:32 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:27:32 functional-158523 kubelet[14998]: E1009 19:27:32.385356   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-158523" podUID="589c70f36d169281ef056387fc3a74a2"
	Oct 09 19:27:33 functional-158523 kubelet[14998]: E1009 19:27:33.108889   14998 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-158523.186ce8d7e4fa54bc\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-158523.186ce8d7e4fa54bc  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-158523,UID:functional-158523,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-158523 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-158523,},FirstTimestamp:2025-10-09 19:19:28.352244924 +0000 UTC m=+0.607978562,LastTimestamp:2025-10-09 19:19:28.354154321 +0000 UTC m=+0.609887967,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-158523,}"
	Oct 09 19:27:36 functional-158523 kubelet[14998]: E1009 19:27:36.361195   14998 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:27:36 functional-158523 kubelet[14998]: E1009 19:27:36.389814   14998 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:27:36 functional-158523 kubelet[14998]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:27:36 functional-158523 kubelet[14998]:  > podSandboxID="65203e222aa74740eff7a55e03a0b2e5e7c97409eb1aff251b14d64f4ad6aaa2"
	Oct 09 19:27:36 functional-158523 kubelet[14998]: E1009 19:27:36.389958   14998 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:27:36 functional-158523 kubelet[14998]:         container kube-apiserver start failed in pod kube-apiserver-functional-158523_kube-system(8b0d1e7a228bb11c7e5ac0baa08c68e2): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:27:36 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:27:36 functional-158523 kubelet[14998]: E1009 19:27:36.390002   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-158523" podUID="8b0d1e7a228bb11c7e5ac0baa08c68e2"
	Oct 09 19:27:37 functional-158523 kubelet[14998]: E1009 19:27:37.028367   14998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-158523?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 19:27:37 functional-158523 kubelet[14998]: E1009 19:27:37.145069   14998 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-158523&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 09 19:27:37 functional-158523 kubelet[14998]: I1009 19:27:37.226052   14998 kubelet_node_status.go:75] "Attempting to register node" node="functional-158523"
	Oct 09 19:27:37 functional-158523 kubelet[14998]: E1009 19:27:37.226460   14998 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-158523"
	Oct 09 19:27:38 functional-158523 kubelet[14998]: E1009 19:27:38.390281   14998 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-158523\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523: exit status 2 (304.745537ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-158523" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (241.59s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-158523 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) Non-zero exit: kubectl --context functional-158523 replace --force -f testdata/mysql.yaml: exit status 1 (51.912536ms)

                                                
                                                
** stderr ** 
	E1009 19:23:46.990077  189855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:46.990732  189855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	unable to recognize "testdata/mysql.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused
	unable to recognize "testdata/mysql.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1800: failed to kubectl replace mysql: args "kubectl --context functional-158523 replace --force -f testdata/mysql.yaml" failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-158523
helpers_test.go:243: (dbg) docker inspect functional-158523:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	        "Created": "2025-10-09T18:56:39.519997973Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 156103,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:56:39.555178961Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hosts",
	        "LogPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4-json.log",
	        "Name": "/functional-158523",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-158523:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-158523",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	                "LowerDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-158523",
	                "Source": "/var/lib/docker/volumes/functional-158523/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-158523",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-158523",
	                "name.minikube.sigs.k8s.io": "functional-158523",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "50f61b8c99f766763e9e30d37584e537ddf10f74215a382a4221cbe7f7c2e821",
	            "SandboxKey": "/var/run/docker/netns/50f61b8c99f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-158523": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:a1:34:24:78:d1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6347eb7b6b2842e266f51d3873c8aa169a4287ea52510c3d62dc4ed41012963c",
	                    "EndpointID": "eb1ada23cbc058d52037dd94574627dd1b0fedf455f291e656f050bf06881952",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-158523",
	                        "dea3735c567c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523
I1009 19:23:47.020258  141519 retry.go:31] will retry after 3.908607827s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523: exit status 2 (326.061693ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 logs -n 25
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-158523 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh     │ functional-158523 ssh sudo systemctl is-active containerd                                                                                                       │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ ssh     │ functional-158523 ssh sudo cat /etc/test/nested/copy/141519/hosts                                                                                               │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image load --daemon kicbase/echo-server:functional-158523 --alsologtostderr                                                                   │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image ls                                                                                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image load --daemon kicbase/echo-server:functional-158523 --alsologtostderr                                                                   │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ mount   │ -p functional-158523 /tmp/TestFunctionalparallelMountCmdany-port379358751/001:/mount-9p --alsologtostderr -v=1                                                  │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ ssh     │ functional-158523 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ image   │ functional-158523 image ls                                                                                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh     │ functional-158523 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh     │ functional-158523 ssh -- ls -la /mount-9p                                                                                                                       │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image load --daemon kicbase/echo-server:functional-158523 --alsologtostderr                                                                   │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh     │ functional-158523 ssh cat /mount-9p/test-1760037823491782947                                                                                                    │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh     │ functional-158523 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                                │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ image   │ functional-158523 image ls                                                                                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh     │ functional-158523 ssh sudo umount -f /mount-9p                                                                                                                  │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image save kicbase/echo-server:functional-158523 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh     │ functional-158523 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ mount   │ -p functional-158523 /tmp/TestFunctionalparallelMountCmdspecific-port1780211196/001:/mount-9p --alsologtostderr -v=1 --port 46464                               │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ image   │ functional-158523 image rm kicbase/echo-server:functional-158523 --alsologtostderr                                                                              │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image ls                                                                                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image save --daemon kicbase/echo-server:functional-158523 --alsologtostderr                                                                   │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh     │ functional-158523 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh     │ functional-158523 ssh -- ls -la /mount-9p                                                                                                                       │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:23:36
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:23:36.037175  182999 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:23:36.037778  182999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:23:36.037796  182999 out.go:374] Setting ErrFile to fd 2...
	I1009 19:23:36.037812  182999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:23:36.038132  182999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:23:36.038788  182999 out.go:368] Setting JSON to false
	I1009 19:23:36.039721  182999 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3965,"bootTime":1760033851,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:23:36.039834  182999 start.go:143] virtualization: kvm guest
	I1009 19:23:36.041723  182999 out.go:179] * [functional-158523] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:23:36.043370  182999 notify.go:221] Checking for updates...
	I1009 19:23:36.043393  182999 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:23:36.044914  182999 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:23:36.046316  182999 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:23:36.050105  182999 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:23:36.051500  182999 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:23:36.052919  182999 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:23:36.054742  182999 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:23:36.055550  182999 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:23:36.081935  182999 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:23:36.082095  182999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:23:36.148703  182999 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:23:36.138476174 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:23:36.148869  182999 docker.go:319] overlay module found
	I1009 19:23:36.150853  182999 out.go:179] * Using the docker driver based on existing profile
	I1009 19:23:36.152205  182999 start.go:309] selected driver: docker
	I1009 19:23:36.152222  182999 start.go:930] validating driver "docker" against &{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:23:36.152322  182999 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:23:36.152439  182999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:23:36.242576  182999 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:23:36.229646073 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:23:36.243640  182999 cni.go:84] Creating CNI manager for ""
	I1009 19:23:36.243714  182999 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:23:36.243784  182999 start.go:353] cluster config:
	{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:23:36.246434  182999 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 09 19:23:45 functional-158523 crio[5871]: time="2025-10-09T19:23:45.367363831Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:45 functional-158523 crio[5871]: time="2025-10-09T19:23:45.367788576Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:45 functional-158523 crio[5871]: time="2025-10-09T19:23:45.380324941Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=90f84abb-ddbf-4af6-947f-6dc54986010b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:45 functional-158523 crio[5871]: time="2025-10-09T19:23:45.381807564Z" level=info msg="createCtr: deleting container ID 8b40c671c3a8230bbe5000d004c33baa79686480065bcb75df5f22fd088d4408 from idIndex" id=90f84abb-ddbf-4af6-947f-6dc54986010b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:45 functional-158523 crio[5871]: time="2025-10-09T19:23:45.381853743Z" level=info msg="createCtr: removing container 8b40c671c3a8230bbe5000d004c33baa79686480065bcb75df5f22fd088d4408" id=90f84abb-ddbf-4af6-947f-6dc54986010b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:45 functional-158523 crio[5871]: time="2025-10-09T19:23:45.381897335Z" level=info msg="createCtr: deleting container 8b40c671c3a8230bbe5000d004c33baa79686480065bcb75df5f22fd088d4408 from storage" id=90f84abb-ddbf-4af6-947f-6dc54986010b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:45 functional-158523 crio[5871]: time="2025-10-09T19:23:45.384446884Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-158523_kube-system_8f4f9df5924bbaa4e1ec7f60e6576647_0" id=90f84abb-ddbf-4af6-947f-6dc54986010b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:46 functional-158523 crio[5871]: time="2025-10-09T19:23:46.095374298Z" level=info msg="Checking image status: kicbase/echo-server:functional-158523" id=9adbfe67-459d-41a9-9c3d-d476cd6c8a27 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:46 functional-158523 crio[5871]: time="2025-10-09T19:23:46.120823829Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-158523" id=682b1bed-1e9a-4e87-bf96-e9dadebe1218 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:46 functional-158523 crio[5871]: time="2025-10-09T19:23:46.120978558Z" level=info msg="Image docker.io/kicbase/echo-server:functional-158523 not found" id=682b1bed-1e9a-4e87-bf96-e9dadebe1218 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:46 functional-158523 crio[5871]: time="2025-10-09T19:23:46.121013503Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-158523 found" id=682b1bed-1e9a-4e87-bf96-e9dadebe1218 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:46 functional-158523 crio[5871]: time="2025-10-09T19:23:46.148713817Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-158523" id=90288b50-40e6-405e-90dd-4357be0f5df6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:46 functional-158523 crio[5871]: time="2025-10-09T19:23:46.148876743Z" level=info msg="Image localhost/kicbase/echo-server:functional-158523 not found" id=90288b50-40e6-405e-90dd-4357be0f5df6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:46 functional-158523 crio[5871]: time="2025-10-09T19:23:46.148928733Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-158523 found" id=90288b50-40e6-405e-90dd-4357be0f5df6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:47 functional-158523 crio[5871]: time="2025-10-09T19:23:47.361455246Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=e0e05026-e295-4f4c-bcd3-e10e0aeda719 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:47 functional-158523 crio[5871]: time="2025-10-09T19:23:47.36259504Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=e9422ef6-4240-4a9a-b1d4-7119d97feb8b name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:47 functional-158523 crio[5871]: time="2025-10-09T19:23:47.363758099Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-158523/kube-apiserver" id=ebb02f08-2485-4af7-8625-c2974450c2a7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:47 functional-158523 crio[5871]: time="2025-10-09T19:23:47.364052028Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:47 functional-158523 crio[5871]: time="2025-10-09T19:23:47.367808524Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:47 functional-158523 crio[5871]: time="2025-10-09T19:23:47.368464369Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:47 functional-158523 crio[5871]: time="2025-10-09T19:23:47.387981856Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ebb02f08-2485-4af7-8625-c2974450c2a7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:47 functional-158523 crio[5871]: time="2025-10-09T19:23:47.389481835Z" level=info msg="createCtr: deleting container ID 51bcc6e7177dfceb6d9b4fd6b007f4ffd1145a7e38fefc9b8f4a29bd64229d86 from idIndex" id=ebb02f08-2485-4af7-8625-c2974450c2a7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:47 functional-158523 crio[5871]: time="2025-10-09T19:23:47.389526218Z" level=info msg="createCtr: removing container 51bcc6e7177dfceb6d9b4fd6b007f4ffd1145a7e38fefc9b8f4a29bd64229d86" id=ebb02f08-2485-4af7-8625-c2974450c2a7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:47 functional-158523 crio[5871]: time="2025-10-09T19:23:47.389563207Z" level=info msg="createCtr: deleting container 51bcc6e7177dfceb6d9b4fd6b007f4ffd1145a7e38fefc9b8f4a29bd64229d86 from storage" id=ebb02f08-2485-4af7-8625-c2974450c2a7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:47 functional-158523 crio[5871]: time="2025-10-09T19:23:47.391842547Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-158523_kube-system_8b0d1e7a228bb11c7e5ac0baa08c68e2_0" id=ebb02f08-2485-4af7-8625-c2974450c2a7 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:23:47.977982   18009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:47.978667   18009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:47.980350   18009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:47.980805   18009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:47.982973   18009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:23:48 up  1:06,  0 user,  load average: 0.65, 0.22, 4.17
	Linux functional-158523 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:23:39 functional-158523 kubelet[14998]: I1009 19:23:39.150566   14998 kubelet_node_status.go:75] "Attempting to register node" node="functional-158523"
	Oct 09 19:23:39 functional-158523 kubelet[14998]: E1009 19:23:39.150948   14998 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-158523"
	Oct 09 19:23:39 functional-158523 kubelet[14998]: E1009 19:23:39.478604   14998 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 09 19:23:40 functional-158523 kubelet[14998]: E1009 19:23:40.865496   14998 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-158523&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 09 19:23:42 functional-158523 kubelet[14998]: E1009 19:23:42.192342   14998 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-158523.186ce8d7e4fa8e80  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-158523,UID:functional-158523,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-158523 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-158523,},FirstTimestamp:2025-10-09 19:19:28.352259712 +0000 UTC m=+0.607993345,LastTimestamp:2025-10-09 19:19:28.352259712 +0000 UTC m=+0.607993345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-158523,}"
	Oct 09 19:23:44 functional-158523 kubelet[14998]: E1009 19:23:44.778696   14998 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 09 19:23:45 functional-158523 kubelet[14998]: E1009 19:23:45.360737   14998 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:23:45 functional-158523 kubelet[14998]: E1009 19:23:45.384774   14998 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:23:45 functional-158523 kubelet[14998]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:45 functional-158523 kubelet[14998]:  > podSandboxID="6383e73654a94c651294b3fe09e624ff42b7d1cd5f16f1695f8c59205622b197"
	Oct 09 19:23:45 functional-158523 kubelet[14998]: E1009 19:23:45.384878   14998 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:23:45 functional-158523 kubelet[14998]:         container etcd start failed in pod etcd-functional-158523_kube-system(8f4f9df5924bbaa4e1ec7f60e6576647): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:45 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:23:45 functional-158523 kubelet[14998]: E1009 19:23:45.384918   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-158523" podUID="8f4f9df5924bbaa4e1ec7f60e6576647"
	Oct 09 19:23:45 functional-158523 kubelet[14998]: E1009 19:23:45.988573   14998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-158523?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 19:23:46 functional-158523 kubelet[14998]: I1009 19:23:46.152930   14998 kubelet_node_status.go:75] "Attempting to register node" node="functional-158523"
	Oct 09 19:23:46 functional-158523 kubelet[14998]: E1009 19:23:46.153406   14998 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-158523"
	Oct 09 19:23:47 functional-158523 kubelet[14998]: E1009 19:23:47.360911   14998 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:23:47 functional-158523 kubelet[14998]: E1009 19:23:47.392195   14998 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:23:47 functional-158523 kubelet[14998]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:47 functional-158523 kubelet[14998]:  > podSandboxID="65203e222aa74740eff7a55e03a0b2e5e7c97409eb1aff251b14d64f4ad6aaa2"
	Oct 09 19:23:47 functional-158523 kubelet[14998]: E1009 19:23:47.392307   14998 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:23:47 functional-158523 kubelet[14998]:         container kube-apiserver start failed in pod kube-apiserver-functional-158523_kube-system(8b0d1e7a228bb11c7e5ac0baa08c68e2): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:47 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:23:47 functional-158523 kubelet[14998]: E1009 19:23:47.392338   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-158523" podUID="8b0d1e7a228bb11c7e5ac0baa08c68e2"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523: exit status 2 (322.816492ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-158523" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/MySQL (1.47s)

x
+
TestFunctional/parallel/NodeLabels (1.33s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-158523 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-158523 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (50.173732ms)

** stderr ** 
	E1009 19:23:41.019360  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.019736  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.021232  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.021575  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.022987  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-158523 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	E1009 19:23:41.019360  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.019736  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.021232  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.021575  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.022987  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	E1009 19:23:41.019360  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.019736  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.021232  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.021575  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.022987  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	E1009 19:23:41.019360  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.019736  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.021232  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.021575  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.022987  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	E1009 19:23:41.019360  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.019736  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.021232  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.021575  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.022987  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	E1009 19:23:41.019360  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.019736  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.021232  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.021575  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:23:41.022987  187077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-158523
helpers_test.go:243: (dbg) docker inspect functional-158523:

-- stdout --
	[
	    {
	        "Id": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	        "Created": "2025-10-09T18:56:39.519997973Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 156103,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:56:39.555178961Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/hosts",
	        "LogPath": "/var/lib/docker/containers/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4/dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4-json.log",
	        "Name": "/functional-158523",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-158523:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-158523",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dea3735c567c565098ce3843772aca3efe758d2df336df2ecf9bed824c9199d4",
	                "LowerDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a222508f56b7f66f633411a4eddbe0d76684a8983ec91da130c703fcb2f518f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-158523",
	                "Source": "/var/lib/docker/volumes/functional-158523/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-158523",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-158523",
	                "name.minikube.sigs.k8s.io": "functional-158523",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "50f61b8c99f766763e9e30d37584e537ddf10f74215a382a4221cbe7f7c2e821",
	            "SandboxKey": "/var/run/docker/netns/50f61b8c99f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-158523": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:a1:34:24:78:d1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6347eb7b6b2842e266f51d3873c8aa169a4287ea52510c3d62dc4ed41012963c",
	                    "EndpointID": "eb1ada23cbc058d52037dd94574627dd1b0fedf455f291e656f050bf06881952",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-158523",
	                        "dea3735c567c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-158523 -n functional-158523: exit status 2 (302.638356ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 logs -n 25
helpers_test.go:260: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp        │ functional-158523 cp functional-158523:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3343028733/001/cp-test.txt │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ service   │ functional-158523 service --namespace=default --https --url hello-node                                                     │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ start     │ -p functional-158523 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ ssh       │ functional-158523 ssh -n functional-158523 sudo cat /home/docker/cp-test.txt                                               │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ service   │ functional-158523 service hello-node --url --format={{.IP}}                                                                │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-158523 --alsologtostderr -v=1                                                             │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ service   │ functional-158523 service hello-node --url                                                                                 │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ cp        │ functional-158523 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh       │ functional-158523 ssh -n functional-158523 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh       │ functional-158523 ssh echo hello                                                                                           │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh       │ functional-158523 ssh cat /etc/hostname                                                                                    │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ tunnel    │ functional-158523 tunnel --alsologtostderr                                                                                 │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ tunnel    │ functional-158523 tunnel --alsologtostderr                                                                                 │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ addons    │ functional-158523 addons list                                                                                              │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ tunnel    │ functional-158523 tunnel --alsologtostderr                                                                                 │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ addons    │ functional-158523 addons list -o json                                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh       │ functional-158523 ssh sudo cat /etc/ssl/certs/141519.pem                                                                   │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh       │ functional-158523 ssh sudo cat /usr/share/ca-certificates/141519.pem                                                       │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh       │ functional-158523 ssh sudo cat /etc/ssl/certs/51391683.0                                                                   │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh       │ functional-158523 ssh sudo cat /etc/ssl/certs/1415192.pem                                                                  │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh       │ functional-158523 ssh sudo cat /usr/share/ca-certificates/1415192.pem                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh       │ functional-158523 ssh sudo systemctl is-active docker                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ ssh       │ functional-158523 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                   │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh       │ functional-158523 ssh sudo systemctl is-active containerd                                                                  │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ ssh       │ functional-158523 ssh sudo cat /etc/test/nested/copy/141519/hosts                                                          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	└───────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:23:36
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:23:36.037175  182999 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:23:36.037778  182999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:23:36.037796  182999 out.go:374] Setting ErrFile to fd 2...
	I1009 19:23:36.037812  182999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:23:36.038132  182999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:23:36.038788  182999 out.go:368] Setting JSON to false
	I1009 19:23:36.039721  182999 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3965,"bootTime":1760033851,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:23:36.039834  182999 start.go:143] virtualization: kvm guest
	I1009 19:23:36.041723  182999 out.go:179] * [functional-158523] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:23:36.043370  182999 notify.go:221] Checking for updates...
	I1009 19:23:36.043393  182999 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:23:36.044914  182999 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:23:36.046316  182999 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:23:36.050105  182999 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:23:36.051500  182999 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:23:36.052919  182999 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:23:36.054742  182999 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:23:36.055550  182999 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:23:36.081935  182999 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:23:36.082095  182999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:23:36.148703  182999 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:23:36.138476174 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:23:36.148869  182999 docker.go:319] overlay module found
	I1009 19:23:36.150853  182999 out.go:179] * Using the docker driver based on existing profile
	I1009 19:23:36.152205  182999 start.go:309] selected driver: docker
	I1009 19:23:36.152222  182999 start.go:930] validating driver "docker" against &{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:23:36.152322  182999 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:23:36.152439  182999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:23:36.242576  182999 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:23:36.229646073 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:23:36.243640  182999 cni.go:84] Creating CNI manager for ""
	I1009 19:23:36.243714  182999 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:23:36.243784  182999 start.go:353] cluster config:
	{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:23:36.246434  182999 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.391642132Z" level=info msg="createCtr: deleting container 48e7d06964617ecb6465098a4e02d6e62f6de72bec3b6d68067bb7185b5532ad from storage" id=e8fa8460-2b29-4b93-a688-665bd082facc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.393554439Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-158523_kube-system_8f4f9df5924bbaa4e1ec7f60e6576647_0" id=a361bc75-f795-41e7-b4ca-66571bdf9d8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:34 functional-158523 crio[5871]: time="2025-10-09T19:23:34.393955241Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-158523_kube-system_8b0d1e7a228bb11c7e5ac0baa08c68e2_0" id=e8fa8460-2b29-4b93-a688-665bd082facc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.362795549Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=458e9ac2-8e82-42ad-9ed8-0176b0506eba name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.363449782Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=8931ed4c-df27-484a-9356-5e1a89e73ba0 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.364207793Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=710da863-06db-486d-b4c9-80a482d2e979 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.364942224Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=1817f978-3968-47f0-815e-0370c8ea5da4 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.365297945Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-158523/kube-scheduler" id=75da15fb-02f3-48d8-84e1-896f1291da55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.365592193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.366507203Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-158523/kube-controller-manager" id=781431b2-8915-463f-8e5a-d7f2565f30a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.366773782Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.371011337Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.371617755Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.375896811Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.376619155Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.394891438Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=781431b2-8915-463f-8e5a-d7f2565f30a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.396523636Z" level=info msg="createCtr: deleting container ID 538bb1b109992ce2ce234957edae5565e9c483cb0f0c6d4208781320338d4672 from idIndex" id=781431b2-8915-463f-8e5a-d7f2565f30a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.396568828Z" level=info msg="createCtr: removing container 538bb1b109992ce2ce234957edae5565e9c483cb0f0c6d4208781320338d4672" id=781431b2-8915-463f-8e5a-d7f2565f30a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.396612269Z" level=info msg="createCtr: deleting container 538bb1b109992ce2ce234957edae5565e9c483cb0f0c6d4208781320338d4672 from storage" id=781431b2-8915-463f-8e5a-d7f2565f30a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.396771952Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=75da15fb-02f3-48d8-84e1-896f1291da55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.399840838Z" level=info msg="createCtr: deleting container ID dbfe9dcf924355fe67d799dd3987a44b22952cd4be50e254a75424a23ba37180 from idIndex" id=75da15fb-02f3-48d8-84e1-896f1291da55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.399919085Z" level=info msg="createCtr: removing container dbfe9dcf924355fe67d799dd3987a44b22952cd4be50e254a75424a23ba37180" id=75da15fb-02f3-48d8-84e1-896f1291da55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.399967256Z" level=info msg="createCtr: deleting container dbfe9dcf924355fe67d799dd3987a44b22952cd4be50e254a75424a23ba37180 from storage" id=75da15fb-02f3-48d8-84e1-896f1291da55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.40457004Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-158523_kube-system_08a2248bbc397b9ed927890e2073120b_0" id=781431b2-8915-463f-8e5a-d7f2565f30a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:23:37 functional-158523 crio[5871]: time="2025-10-09T19:23:37.404996713Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-158523_kube-system_589c70f36d169281ef056387fc3a74a2_0" id=75da15fb-02f3-48d8-84e1-896f1291da55 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:23:41.920220   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:41.921081   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:41.922139   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:41.922722   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 19:23:41.924288   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:23:41 up  1:06,  0 user,  load average: 0.68, 0.21, 4.21
	Linux functional-158523 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:23:34 functional-158523 kubelet[14998]:         container kube-apiserver start failed in pod kube-apiserver-functional-158523_kube-system(8b0d1e7a228bb11c7e5ac0baa08c68e2): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:34 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:23:34 functional-158523 kubelet[14998]: E1009 19:23:34.395491   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-158523" podUID="8b0d1e7a228bb11c7e5ac0baa08c68e2"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.362223   14998 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.362475   14998 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-158523\" not found" node="functional-158523"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.404935   14998 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:23:37 functional-158523 kubelet[14998]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:37 functional-158523 kubelet[14998]:  > podSandboxID="c46b8882958a3d5604399e1a44a408e9b7fbd2d13564b122e7c9bc822d9ccdf7"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.405081   14998 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:23:37 functional-158523 kubelet[14998]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-158523_kube-system(08a2248bbc397b9ed927890e2073120b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:37 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.405124   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-158523" podUID="08a2248bbc397b9ed927890e2073120b"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.405285   14998 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:23:37 functional-158523 kubelet[14998]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:37 functional-158523 kubelet[14998]:  > podSandboxID="ec5fd20197d3cb2af48faa87c42dae73063f326b50e117bd23262f4dc00885b3"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.405393   14998 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:23:37 functional-158523 kubelet[14998]:         container kube-scheduler start failed in pod kube-scheduler-functional-158523_kube-system(589c70f36d169281ef056387fc3a74a2): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:23:37 functional-158523 kubelet[14998]:  > logger="UnhandledError"
	Oct 09 19:23:37 functional-158523 kubelet[14998]: E1009 19:23:37.406610   14998 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-158523" podUID="589c70f36d169281ef056387fc3a74a2"
	Oct 09 19:23:38 functional-158523 kubelet[14998]: E1009 19:23:38.374163   14998 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-158523\" not found"
	Oct 09 19:23:38 functional-158523 kubelet[14998]: E1009 19:23:38.986984   14998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-158523?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 19:23:39 functional-158523 kubelet[14998]: I1009 19:23:39.150566   14998 kubelet_node_status.go:75] "Attempting to register node" node="functional-158523"
	Oct 09 19:23:39 functional-158523 kubelet[14998]: E1009 19:23:39.150948   14998 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-158523"
	Oct 09 19:23:39 functional-158523 kubelet[14998]: E1009 19:23:39.478604   14998 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 09 19:23:40 functional-158523 kubelet[14998]: E1009 19:23:40.865496   14998 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-158523&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523
I1009 19:23:42.155304  141519 retry.go:31] will retry after 4.864373985s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-158523 -n functional-158523: exit status 2 (312.329998ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-158523" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (1.33s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-158523 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-158523 create deployment hello-node --image kicbase/echo-server: exit status 1 (62.023756ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-158523 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158523 service list: exit status 103 (319.592994ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-158523 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-158523"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-amd64 -p functional-158523 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-158523 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-158523\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158523 service list -o json: exit status 103 (345.260628ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-158523 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-158523"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-amd64 -p functional-158523 service list -o json": exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158523 service --namespace=default --https --url hello-node: exit status 103 (340.63674ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-158523 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-158523"

                                                
                                                
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-158523 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158523 service hello-node --url --format={{.IP}}: exit status 103 (348.415421ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-158523 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-158523"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-158523 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-158523 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-158523\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158523 service hello-node --url: exit status 103 (304.861488ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-158523 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-158523"

                                                
                                                
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-158523 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-158523 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-158523"
functional_test.go:1579: failed to parse "* The control-plane node functional-158523 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-158523\"": parse "* The control-plane node functional-158523 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-158523\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-158523 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-158523 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1009 19:23:38.205394  185144 out.go:360] Setting OutFile to fd 1 ...
I1009 19:23:38.205734  185144 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:23:38.205747  185144 out.go:374] Setting ErrFile to fd 2...
I1009 19:23:38.205755  185144 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:23:38.206087  185144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
I1009 19:23:38.206479  185144 mustload.go:65] Loading cluster: functional-158523
I1009 19:23:38.207116  185144 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:23:38.207778  185144 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
I1009 19:23:38.227484  185144 host.go:66] Checking if "functional-158523" exists ...
I1009 19:23:38.227855  185144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1009 19:23:38.334129  185144 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:23:38.321540841 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1009 19:23:38.334269  185144 api_server.go:166] Checking apiserver status ...
I1009 19:23:38.334320  185144 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1009 19:23:38.334388  185144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
I1009 19:23:38.358854  185144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
W1009 19:23:38.471236  185144 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1009 19:23:38.474044  185144 out.go:179] * The control-plane node functional-158523 apiserver is not running: (state=Stopped)
I1009 19:23:38.476646  185144 out.go:179]   To start a cluster, run: "minikube start -p functional-158523"

                                                
                                                
stdout: * The control-plane node functional-158523 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-158523"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-158523 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-158523 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-158523 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-158523 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-158523 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-158523 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-158523 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-158523 apply -f testdata/testsvc.yaml: exit status 1 (70.214369ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-158523 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (98.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1009 19:23:38.560298  141519 retry.go:31] will retry after 3.594038646s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-158523 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-158523 get svc nginx-svc: exit status 1 (54.601209ms)

                                                
                                                
** stderr ** 
	E1009 19:25:17.287238  192884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:25:17.288278  192884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:25:17.288962  192884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:25:17.290494  192884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 19:25:17.290836  192884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-158523 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (98.73s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 image load --daemon kicbase/echo-server:functional-158523 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-158523" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.95s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 image load --daemon kicbase/echo-server:functional-158523 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-158523" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (2.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-158523 /tmp/TestFunctionalparallelMountCmdany-port379358751/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760037823491782947" to /tmp/TestFunctionalparallelMountCmdany-port379358751/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760037823491782947" to /tmp/TestFunctionalparallelMountCmdany-port379358751/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760037823491782947" to /tmp/TestFunctionalparallelMountCmdany-port379358751/001/test-1760037823491782947
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158523 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (299.109353ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1009 19:23:43.791202  141519 retry.go:31] will retry after 456.653988ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  9 19:23 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  9 19:23 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  9 19:23 test-1760037823491782947
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh cat /mount-9p/test-1760037823491782947
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-158523 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-158523 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (48.574028ms)

                                                
                                                
** stderr ** 
	E1009 19:23:45.152428  188860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	error: unable to recognize "testdata/busybox-mount-test.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-158523 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158523 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (292.43436ms)

                                                
                                                
-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=46765)
	total 2
	-rw-r--r-- 1 docker docker 24 Oct  9 19:23 created-by-test
	-rw-r--r-- 1 docker docker 24 Oct  9 19:23 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Oct  9 19:23 test-1760037823491782947
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-158523 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-158523 /tmp/TestFunctionalparallelMountCmdany-port379358751/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-158523 /tmp/TestFunctionalparallelMountCmdany-port379358751/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalparallelMountCmdany-port379358751/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:46765
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port379358751/001 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-158523 /tmp/TestFunctionalparallelMountCmdany-port379358751/001:/mount-9p --alsologtostderr -v=1] stderr:
I1009 19:23:43.540212  188167 out.go:360] Setting OutFile to fd 1 ...
I1009 19:23:43.540374  188167 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:23:43.540394  188167 out.go:374] Setting ErrFile to fd 2...
I1009 19:23:43.540400  188167 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:23:43.540690  188167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
I1009 19:23:43.540985  188167 mustload.go:65] Loading cluster: functional-158523
I1009 19:23:43.541367  188167 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:23:43.541798  188167 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
I1009 19:23:43.560898  188167 host.go:66] Checking if "functional-158523" exists ...
I1009 19:23:43.561178  188167 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1009 19:23:43.637080  188167 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-10-09 19:23:43.624221696 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1009 19:23:43.637286  188167 cli_runner.go:164] Run: docker network inspect functional-158523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1009 19:23:43.660426  188167 out.go:179] * Mounting host path /tmp/TestFunctionalparallelMountCmdany-port379358751/001 into VM as /mount-9p ...
I1009 19:23:43.662175  188167 out.go:179]   - Mount type:   9p
I1009 19:23:43.663641  188167 out.go:179]   - User ID:      docker
I1009 19:23:43.665130  188167 out.go:179]   - Group ID:     docker
I1009 19:23:43.666462  188167 out.go:179]   - Version:      9p2000.L
I1009 19:23:43.668049  188167 out.go:179]   - Message Size: 262144
I1009 19:23:43.669778  188167 out.go:179]   - Options:      map[]
I1009 19:23:43.671037  188167 out.go:179]   - Bind Address: 192.168.49.1:46765
I1009 19:23:43.672480  188167 out.go:179] * Userspace file server: 
I1009 19:23:43.672711  188167 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1009 19:23:43.672817  188167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
I1009 19:23:43.692796  188167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
I1009 19:23:43.799113  188167 mount.go:180] unmount for /mount-9p ran successfully
I1009 19:23:43.799163  188167 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1009 19:23:43.808077  188167 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=46765,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1009 19:23:43.850881  188167 main.go:125] stdlog: ufs.go:141 connected
I1009 19:23:43.851061  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tversion tag 65535 msize 262144 version '9P2000.L'
I1009 19:23:43.851115  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rversion tag 65535 msize 262144 version '9P2000'
I1009 19:23:43.851345  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1009 19:23:43.851433  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rattach tag 0 aqid (20fa088 ca6de402 'd')
I1009 19:23:43.851733  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tstat tag 0 fid 0
I1009 19:23:43.851873  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa088 ca6de402 'd') m d775 at 0 mt 1760037823 l 4096 t 0 d 0 ext )
I1009 19:23:43.853401  188167 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/.mount-process: {Name:mk1917fe854d79bd4a5986a3991bf4fc49dc92fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1009 19:23:43.853603  188167 mount.go:105] mount successful: ""
I1009 19:23:43.855441  188167 out.go:179] * Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port379358751/001 to /mount-9p
I1009 19:23:43.857220  188167 out.go:203] 
I1009 19:23:43.858642  188167 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1009 19:23:44.809422  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tstat tag 0 fid 0
I1009 19:23:44.809566  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa088 ca6de402 'd') m d775 at 0 mt 1760037823 l 4096 t 0 d 0 ext )
I1009 19:23:44.809901  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Twalk tag 0 fid 0 newfid 1 
I1009 19:23:44.809958  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rwalk tag 0 
I1009 19:23:44.810062  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Topen tag 0 fid 1 mode 0
I1009 19:23:44.810132  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Ropen tag 0 qid (20fa088 ca6de402 'd') iounit 0
I1009 19:23:44.810218  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tstat tag 0 fid 0
I1009 19:23:44.810320  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa088 ca6de402 'd') m d775 at 0 mt 1760037823 l 4096 t 0 d 0 ext )
I1009 19:23:44.810550  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tread tag 0 fid 1 offset 0 count 262120
I1009 19:23:44.810724  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rread tag 0 count 258
I1009 19:23:44.810843  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tread tag 0 fid 1 offset 258 count 261862
I1009 19:23:44.810873  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rread tag 0 count 0
I1009 19:23:44.810974  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tread tag 0 fid 1 offset 258 count 262120
I1009 19:23:44.811001  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rread tag 0 count 0
I1009 19:23:44.811107  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1009 19:23:44.811151  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rwalk tag 0 (20fa08a ca6de402 '') 
I1009 19:23:44.811270  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tstat tag 0 fid 2
I1009 19:23:44.811361  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa08a ca6de402 '') m 644 at 0 mt 1760037823 l 24 t 0 d 0 ext )
I1009 19:23:44.811512  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tstat tag 0 fid 2
I1009 19:23:44.811582  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa08a ca6de402 '') m 644 at 0 mt 1760037823 l 24 t 0 d 0 ext )
I1009 19:23:44.811728  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tclunk tag 0 fid 2
I1009 19:23:44.811780  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rclunk tag 0
I1009 19:23:44.811892  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Twalk tag 0 fid 0 newfid 2 0:'test-1760037823491782947' 
I1009 19:23:44.811928  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rwalk tag 0 (20fa08b ca6de402 '') 
I1009 19:23:44.812053  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tstat tag 0 fid 2
I1009 19:23:44.812140  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rstat tag 0 st ('test-1760037823491782947' 'jenkins' 'balintp' '' q (20fa08b ca6de402 '') m 644 at 0 mt 1760037823 l 24 t 0 d 0 ext )
I1009 19:23:44.812407  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tstat tag 0 fid 2
I1009 19:23:44.812500  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rstat tag 0 st ('test-1760037823491782947' 'jenkins' 'balintp' '' q (20fa08b ca6de402 '') m 644 at 0 mt 1760037823 l 24 t 0 d 0 ext )
I1009 19:23:44.812707  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tclunk tag 0 fid 2
I1009 19:23:44.812740  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rclunk tag 0
I1009 19:23:44.812888  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1009 19:23:44.812922  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rwalk tag 0 (20fa089 ca6de402 '') 
I1009 19:23:44.813025  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tstat tag 0 fid 2
I1009 19:23:44.813123  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa089 ca6de402 '') m 644 at 0 mt 1760037823 l 24 t 0 d 0 ext )
I1009 19:23:44.813237  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tstat tag 0 fid 2
I1009 19:23:44.813301  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa089 ca6de402 '') m 644 at 0 mt 1760037823 l 24 t 0 d 0 ext )
I1009 19:23:44.813431  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tclunk tag 0 fid 2
I1009 19:23:44.813472  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rclunk tag 0
I1009 19:23:44.813632  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tread tag 0 fid 1 offset 258 count 262120
I1009 19:23:44.813667  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rread tag 0 count 0
I1009 19:23:44.813806  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tclunk tag 0 fid 1
I1009 19:23:44.813850  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rclunk tag 0
I1009 19:23:45.091965  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Twalk tag 0 fid 0 newfid 1 0:'test-1760037823491782947' 
I1009 19:23:45.092041  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rwalk tag 0 (20fa08b ca6de402 '') 
I1009 19:23:45.092235  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tstat tag 0 fid 1
I1009 19:23:45.092353  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rstat tag 0 st ('test-1760037823491782947' 'jenkins' 'balintp' '' q (20fa08b ca6de402 '') m 644 at 0 mt 1760037823 l 24 t 0 d 0 ext )
I1009 19:23:45.092528  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Twalk tag 0 fid 1 newfid 2 
I1009 19:23:45.092569  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rwalk tag 0 
I1009 19:23:45.092700  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Topen tag 0 fid 2 mode 0
I1009 19:23:45.092756  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Ropen tag 0 qid (20fa08b ca6de402 '') iounit 0
I1009 19:23:45.092868  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tstat tag 0 fid 1
I1009 19:23:45.092969  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rstat tag 0 st ('test-1760037823491782947' 'jenkins' 'balintp' '' q (20fa08b ca6de402 '') m 644 at 0 mt 1760037823 l 24 t 0 d 0 ext )
I1009 19:23:45.093244  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tread tag 0 fid 2 offset 0 count 24
I1009 19:23:45.093285  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rread tag 0 count 24
I1009 19:23:45.093470  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tclunk tag 0 fid 2
I1009 19:23:45.093508  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rclunk tag 0
I1009 19:23:45.093634  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tclunk tag 0 fid 1
I1009 19:23:45.093663  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rclunk tag 0
I1009 19:23:45.436945  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tstat tag 0 fid 0
I1009 19:23:45.437103  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa088 ca6de402 'd') m d775 at 0 mt 1760037823 l 4096 t 0 d 0 ext )
I1009 19:23:45.437462  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Twalk tag 0 fid 0 newfid 1 
I1009 19:23:45.437532  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rwalk tag 0 
I1009 19:23:45.437686  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Topen tag 0 fid 1 mode 0
I1009 19:23:45.437747  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Ropen tag 0 qid (20fa088 ca6de402 'd') iounit 0
I1009 19:23:45.437920  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tstat tag 0 fid 0
I1009 19:23:45.438066  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa088 ca6de402 'd') m d775 at 0 mt 1760037823 l 4096 t 0 d 0 ext )
I1009 19:23:45.438397  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tread tag 0 fid 1 offset 0 count 262120
I1009 19:23:45.438582  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rread tag 0 count 258
I1009 19:23:45.438777  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tread tag 0 fid 1 offset 258 count 261862
I1009 19:23:45.438819  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rread tag 0 count 0
I1009 19:23:45.438946  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tread tag 0 fid 1 offset 258 count 262120
I1009 19:23:45.438973  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rread tag 0 count 0
I1009 19:23:45.439106  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1009 19:23:45.439158  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rwalk tag 0 (20fa08a ca6de402 '') 
I1009 19:23:45.439246  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tstat tag 0 fid 2
I1009 19:23:45.439313  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa08a ca6de402 '') m 644 at 0 mt 1760037823 l 24 t 0 d 0 ext )
I1009 19:23:45.439438  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tstat tag 0 fid 2
I1009 19:23:45.439521  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa08a ca6de402 '') m 644 at 0 mt 1760037823 l 24 t 0 d 0 ext )
I1009 19:23:45.439621  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tclunk tag 0 fid 2
I1009 19:23:45.439656  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rclunk tag 0
I1009 19:23:45.439793  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Twalk tag 0 fid 0 newfid 2 0:'test-1760037823491782947' 
I1009 19:23:45.439839  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rwalk tag 0 (20fa08b ca6de402 '') 
I1009 19:23:45.439943  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tstat tag 0 fid 2
I1009 19:23:45.440004  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rstat tag 0 st ('test-1760037823491782947' 'jenkins' 'balintp' '' q (20fa08b ca6de402 '') m 644 at 0 mt 1760037823 l 24 t 0 d 0 ext )
I1009 19:23:45.440106  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tstat tag 0 fid 2
I1009 19:23:45.440191  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rstat tag 0 st ('test-1760037823491782947' 'jenkins' 'balintp' '' q (20fa08b ca6de402 '') m 644 at 0 mt 1760037823 l 24 t 0 d 0 ext )
I1009 19:23:45.440328  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tclunk tag 0 fid 2
I1009 19:23:45.440367  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rclunk tag 0
I1009 19:23:45.440512  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1009 19:23:45.440552  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rwalk tag 0 (20fa089 ca6de402 '') 
I1009 19:23:45.440659  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tstat tag 0 fid 2
I1009 19:23:45.440743  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa089 ca6de402 '') m 644 at 0 mt 1760037823 l 24 t 0 d 0 ext )
I1009 19:23:45.440891  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tstat tag 0 fid 2
I1009 19:23:45.440994  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa089 ca6de402 '') m 644 at 0 mt 1760037823 l 24 t 0 d 0 ext )
I1009 19:23:45.441119  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tclunk tag 0 fid 2
I1009 19:23:45.441148  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rclunk tag 0
I1009 19:23:45.441304  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tread tag 0 fid 1 offset 258 count 262120
I1009 19:23:45.441335  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rread tag 0 count 0
I1009 19:23:45.441500  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tclunk tag 0 fid 1
I1009 19:23:45.441543  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rclunk tag 0
I1009 19:23:45.442709  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1009 19:23:45.442767  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rerror tag 0 ename 'file not found' ecode 0
I1009 19:23:45.727288  188167 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:55306 Tclunk tag 0 fid 0
I1009 19:23:45.727333  188167 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:55306 Rclunk tag 0
I1009 19:23:45.727676  188167 main.go:125] stdlog: ufs.go:147 disconnected
I1009 19:23:45.743213  188167 out.go:179] * Unmounting /mount-9p ...
I1009 19:23:45.744556  188167 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1009 19:23:45.752182  188167 mount.go:180] unmount for /mount-9p ran successfully
I1009 19:23:45.752276  188167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/.mount-process: {Name:mk1917fe854d79bd4a5986a3991bf4fc49dc92fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1009 19:23:45.754107  188167 out.go:203] 
W1009 19:23:45.755395  188167 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1009 19:23:45.756465  188167 out.go:203] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (2.35s)
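The 9P trace above is minikube's in-process file server (ufs.go) answering the guest's Twalk/Tstat/Tread/Tclunk requests for the files the test created under /mount-9p; the run then ends with MK_INTERRUPTED (the harness sent a termination signal) rather than an assertion failure, and minikube unmounts /mount-9p on the way out. A minimal sketch of the same mount/inspect/interrupt cycle, assuming the functional-158523 profile is running and the binary sits at out/minikube-linux-amd64 (names taken from the log; not the test's own code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		dir, err := os.MkdirTemp("", "mount-9p-repro")
		if err != nil {
			panic(err)
		}
		// Start the 9p mount in the background, as the test does with `minikube mount`.
		mount := exec.Command("out/minikube-linux-amd64", "-p", "functional-158523",
			"mount", dir+":/mount-9p", "--alsologtostderr")
		mount.Stdout, mount.Stderr = os.Stdout, os.Stderr
		if err := mount.Start(); err != nil {
			panic(err)
		}
		time.Sleep(10 * time.Second) // give the 9P server time to serve the guest

		// Inspect the mount from inside the node, mirroring the test's ssh-based checks.
		out, _ := exec.Command("out/minikube-linux-amd64", "-p", "functional-158523",
			"ssh", "findmnt -T /mount-9p").CombinedOutput()
		fmt.Printf("findmnt -T /mount-9p:\n%s", out)

		// Terminate the mount process; minikube then runs the unmount step
		// ("* Unmounting /mount-9p ...") seen at the end of the log above.
		_ = mount.Process.Signal(os.Interrupt)
		_ = mount.Wait()
	}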

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-158523
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 image load --daemon kicbase/echo-server:functional-158523 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-158523" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)
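The check at functional_test.go:461 is that a tag pushed with `image load --daemon` is visible in `image ls` afterwards. A minimal sketch of the same tag/load/verify sequence, assuming docker, the functional-158523 profile, and out/minikube-linux-amd64 are available (all names taken from the log above; illustrative only):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func run(args ...string) string {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "%v: %v\n%s", args, err, out)
			os.Exit(1)
		}
		return string(out)
	}

	func main() {
		// Tag a locally pulled image with the profile name, then load it into the cluster.
		run("docker", "pull", "kicbase/echo-server:latest")
		run("docker", "tag", "kicbase/echo-server:latest", "kicbase/echo-server:functional-158523")
		run("out/minikube-linux-amd64", "-p", "functional-158523",
			"image", "load", "--daemon", "kicbase/echo-server:functional-158523")

		// The failing check: the tag should now be visible to the cluster's runtime.
		images := run("out/minikube-linux-amd64", "-p", "functional-158523", "image", "ls")
		if !strings.Contains(images, "echo-server:functional-158523") {
			fmt.Println("tag not found in `image ls` output:")
			fmt.Println(images)
		}
	}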

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 image save kicbase/echo-server:functional-158523 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1009 19:23:46.434193  189616 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:23:46.434625  189616 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:23:46.434639  189616 out.go:374] Setting ErrFile to fd 2...
	I1009 19:23:46.434645  189616 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:23:46.434957  189616 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:23:46.435738  189616 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:23:46.435875  189616 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:23:46.436333  189616 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
	I1009 19:23:46.455837  189616 ssh_runner.go:195] Run: systemctl --version
	I1009 19:23:46.455890  189616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
	I1009 19:23:46.473945  189616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
	I1009 19:23:46.576781  189616 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1009 19:23:46.576843  189616 cache_images.go:254] Failed to load cached images for "functional-158523": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1009 19:23:46.576864  189616 cache_images.go:266] failed pushing to: functional-158523

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
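This failure follows directly from ImageSaveToFile above: `image save` never produced echo-server-save.tar, so the stat in cache_images.go fails before any load is attempted. A minimal sketch of the save/verify/load roundtrip, assuming the same binary, profile, and tarball path as the log (illustrative only):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const (
			bin     = "out/minikube-linux-amd64"
			profile = "functional-158523"
			tarball = "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar"
		)

		// Save the image to a tarball, as ImageSaveToFile does.
		save := exec.Command(bin, "-p", profile, "image", "save",
			"kicbase/echo-server:"+profile, tarball)
		if out, err := save.CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "image save failed: %v\n%s", err, out)
			os.Exit(1)
		}

		// The check that failed at functional_test.go:401: the tarball must exist on disk.
		if _, err := os.Stat(tarball); err != nil {
			fmt.Fprintln(os.Stderr, "tarball missing after image save:", err)
			os.Exit(1)
		}

		// Only once the file exists does loading it back make sense (ImageLoadFromFile).
		load := exec.Command(bin, "-p", profile, "image", "load", tarball)
		if out, err := load.CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "image load failed: %v\n%s", err, out)
		}
	}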

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-158523
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 image save --daemon kicbase/echo-server:functional-158523 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-158523
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-158523: exit status 1 (18.428828ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-158523

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-158523

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)
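The assertion at functional_test.go:447 is that, after `image save --daemon`, the image is inspectable in the local Docker daemon under the localhost/ prefix. A minimal sketch of that verification, assuming the same image name and binary path as the log (illustrative only):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Save the cluster's copy of the image back into the local Docker daemon.
		save := exec.Command("out/minikube-linux-amd64", "-p", "functional-158523",
			"image", "save", "--daemon", "kicbase/echo-server:functional-158523")
		if out, err := save.CombinedOutput(); err != nil {
			fmt.Printf("image save --daemon failed: %v\n%s", err, out)
			return
		}
		// The check from the log: the image should now be inspectable locally
		// under the localhost/ prefix.
		inspect := exec.Command("docker", "image", "inspect",
			"localhost/kicbase/echo-server:functional-158523")
		if out, err := inspect.CombinedOutput(); err != nil {
			fmt.Printf("image not found in local daemon: %v\n%s", err, out)
		}
	}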

                                                
                                    
TestMultiControlPlane/serial/StartCluster (501.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1009 19:28:37.184748  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:28:37.191170  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:28:37.202539  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:28:37.224061  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:28:37.265526  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:28:37.347067  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:28:37.508645  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:28:37.830472  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:28:38.472582  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:28:39.754252  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:28:42.317286  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:28:47.438923  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:28:57.680493  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:29:18.162569  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:29:59.125023  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:31:21.050444  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:33:37.182713  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:34:04.892460  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
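The cert_rotation errors above are emitted by a different process (141519, apparently the long-running test binary) than the minikube start under test (194626): a kubeconfig user entry still points at the earlier functional-158523 profile's client.crt, which has since been removed, so client-go keeps logging the failed reload. A small sketch that scans a kubeconfig for user entries whose certificate files are missing, assuming k8s.io/client-go is available and KUBECONFIG is set as in the log (illustrative only):

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// The log sets KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig.
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		// Report auth entries whose client certificate files have been removed,
		// which is what triggers the repeated cert_rotation.go errors above.
		for name, auth := range cfg.AuthInfos {
			if auth.ClientCertificate == "" {
				continue
			}
			if _, statErr := os.Stat(auth.ClientCertificate); statErr != nil {
				fmt.Printf("user %q -> missing %s\n", name, auth.ClientCertificate)
			}
		}
	}
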
ha_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (8m19.852799481s)

                                                
                                                
-- stdout --
	* [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:27:40.840216  194626 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:27:40.840464  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840473  194626 out.go:374] Setting ErrFile to fd 2...
	I1009 19:27:40.840478  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840669  194626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:27:40.841171  194626 out.go:368] Setting JSON to false
	I1009 19:27:40.842061  194626 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4210,"bootTime":1760033851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:27:40.842167  194626 start.go:143] virtualization: kvm guest
	I1009 19:27:40.844410  194626 out.go:179] * [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:27:40.845759  194626 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:27:40.845801  194626 notify.go:221] Checking for updates...
	I1009 19:27:40.848757  194626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:27:40.850175  194626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:27:40.851491  194626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:27:40.852705  194626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:27:40.853892  194626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:27:40.855321  194626 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:27:40.878925  194626 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:27:40.879089  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:40.938194  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.927865936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:40.938299  194626 docker.go:319] overlay module found
	I1009 19:27:40.940236  194626 out.go:179] * Using the docker driver based on user configuration
	I1009 19:27:40.941822  194626 start.go:309] selected driver: docker
	I1009 19:27:40.941843  194626 start.go:930] validating driver "docker" against <nil>
	I1009 19:27:40.941856  194626 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:27:40.942433  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:41.005454  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.995221815 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:41.005685  194626 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 19:27:41.005966  194626 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:27:41.007942  194626 out.go:179] * Using Docker driver with root privileges
	I1009 19:27:41.009445  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:41.009493  194626 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 19:27:41.009501  194626 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:27:41.009585  194626 start.go:353] cluster config:
	{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1009 19:27:41.010949  194626 out.go:179] * Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	I1009 19:27:41.012190  194626 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:27:41.013322  194626 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:27:41.014388  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.014435  194626 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:27:41.014445  194626 cache.go:58] Caching tarball of preloaded images
	I1009 19:27:41.014436  194626 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:27:41.014539  194626 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:27:41.014554  194626 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:27:41.014900  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:41.014929  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json: {Name:mk248a27bef73ecdf1bd71857a83cb8ef52e81ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:41.034874  194626 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:27:41.034900  194626 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:27:41.034920  194626 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:27:41.034961  194626 start.go:361] acquireMachinesLock for ha-898615: {Name:mk23a3ebf19c307a491c00f9452d757837bd240e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:27:41.035085  194626 start.go:365] duration metric: took 102.16µs to acquireMachinesLock for "ha-898615"
	I1009 19:27:41.035119  194626 start.go:94] Provisioning new machine with config: &{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:27:41.035201  194626 start.go:126] createHost starting for "" (driver="docker")
	I1009 19:27:41.037427  194626 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 19:27:41.037659  194626 start.go:160] libmachine.API.Create for "ha-898615" (driver="docker")
	I1009 19:27:41.037695  194626 client.go:168] LocalClient.Create starting
	I1009 19:27:41.037764  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem
	I1009 19:27:41.037804  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037821  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.037887  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem
	I1009 19:27:41.037913  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037927  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.038348  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:27:41.055472  194626 cli_runner.go:211] docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:27:41.055548  194626 network_create.go:284] running [docker network inspect ha-898615] to gather additional debugging logs...
	I1009 19:27:41.055569  194626 cli_runner.go:164] Run: docker network inspect ha-898615
	W1009 19:27:41.074024  194626 cli_runner.go:211] docker network inspect ha-898615 returned with exit code 1
	I1009 19:27:41.074056  194626 network_create.go:287] error running [docker network inspect ha-898615]: docker network inspect ha-898615: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-898615 not found
	I1009 19:27:41.074071  194626 network_create.go:289] output of [docker network inspect ha-898615]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-898615 not found
	
	** /stderr **
	I1009 19:27:41.074158  194626 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:41.091849  194626 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a266e0}
	I1009 19:27:41.091899  194626 network_create.go:124] attempt to create docker network ha-898615 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 19:27:41.091955  194626 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-898615 ha-898615
	I1009 19:27:41.153434  194626 network_create.go:108] docker network ha-898615 192.168.49.0/24 created
	I1009 19:27:41.153469  194626 kic.go:121] calculated static IP "192.168.49.2" for the "ha-898615" container
	I1009 19:27:41.153533  194626 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:27:41.170560  194626 cli_runner.go:164] Run: docker volume create ha-898615 --label name.minikube.sigs.k8s.io=ha-898615 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:27:41.190645  194626 oci.go:103] Successfully created a docker volume ha-898615
	I1009 19:27:41.190737  194626 cli_runner.go:164] Run: docker run --rm --name ha-898615-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --entrypoint /usr/bin/test -v ha-898615:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:27:41.600938  194626 oci.go:107] Successfully prepared a docker volume ha-898615
	I1009 19:27:41.600967  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.600993  194626 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:27:41.601055  194626 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 19:27:46.106521  194626 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.505414435s)
	I1009 19:27:46.106562  194626 kic.go:203] duration metric: took 4.50556336s to extract preloaded images to volume ...
	W1009 19:27:46.106658  194626 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 19:27:46.106697  194626 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 19:27:46.106744  194626 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:27:46.168307  194626 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-898615 --name ha-898615 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-898615 --network ha-898615 --ip 192.168.49.2 --volume ha-898615:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:27:46.438758  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Running}}
	I1009 19:27:46.461401  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.481444  194626 cli_runner.go:164] Run: docker exec ha-898615 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:27:46.527865  194626 oci.go:144] the created container "ha-898615" has a running status.
	I1009 19:27:46.527897  194626 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa...
	I1009 19:27:46.644201  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 19:27:46.644249  194626 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:27:46.674019  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.696195  194626 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:27:46.696225  194626 kic_runner.go:114] Args: [docker exec --privileged ha-898615 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:27:46.751886  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.774821  194626 machine.go:93] provisionDockerMachine start ...
	I1009 19:27:46.774929  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.795147  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.795497  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.795517  194626 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:27:46.948463  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:46.948514  194626 ubuntu.go:182] provisioning hostname "ha-898615"
	I1009 19:27:46.948583  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.967551  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.967801  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.967821  194626 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-898615 && echo "ha-898615" | sudo tee /etc/hostname
	I1009 19:27:47.127119  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:47.127192  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.145629  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.145957  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.145990  194626 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-898615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-898615/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-898615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:27:47.294310  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:27:47.294342  194626 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:27:47.294368  194626 ubuntu.go:190] setting up certificates
	I1009 19:27:47.294401  194626 provision.go:84] configureAuth start
	I1009 19:27:47.294454  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:47.312608  194626 provision.go:143] copyHostCerts
	I1009 19:27:47.312651  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312684  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:27:47.312731  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312817  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:27:47.312923  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.312952  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:27:47.312962  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.313014  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:27:47.313086  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313114  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:27:47.313124  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313163  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:27:47.313236  194626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.ha-898615 san=[127.0.0.1 192.168.49.2 ha-898615 localhost minikube]
	I1009 19:27:47.539620  194626 provision.go:177] copyRemoteCerts
	I1009 19:27:47.539697  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:27:47.539740  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.557819  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:47.662665  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:27:47.662781  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:27:47.683372  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:27:47.683457  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:27:47.701669  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:27:47.701736  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:27:47.720164  194626 provision.go:87] duration metric: took 425.744193ms to configureAuth
	I1009 19:27:47.720200  194626 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:27:47.720451  194626 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:47.720564  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.738437  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.738688  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.738714  194626 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:27:48.000107  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:27:48.000134  194626 machine.go:96] duration metric: took 1.225290325s to provisionDockerMachine
	I1009 19:27:48.000148  194626 client.go:171] duration metric: took 6.962444548s to LocalClient.Create
	I1009 19:27:48.000174  194626 start.go:168] duration metric: took 6.962517267s to libmachine.API.Create "ha-898615"
	I1009 19:27:48.000183  194626 start.go:294] postStartSetup for "ha-898615" (driver="docker")
	I1009 19:27:48.000199  194626 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:27:48.000266  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:27:48.000306  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.017815  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.123997  194626 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:27:48.127754  194626 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:27:48.127793  194626 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:27:48.127806  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:27:48.127864  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:27:48.127968  194626 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:27:48.127983  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:27:48.128079  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:27:48.136280  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:48.157989  194626 start.go:297] duration metric: took 157.790225ms for postStartSetup
	I1009 19:27:48.158323  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.176302  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:48.176728  194626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:27:48.176785  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.195218  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.296952  194626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:27:48.301724  194626 start.go:129] duration metric: took 7.26650252s to createHost
	I1009 19:27:48.301754  194626 start.go:84] releasing machines lock for "ha-898615", held for 7.266653034s
	I1009 19:27:48.301837  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.319813  194626 ssh_runner.go:195] Run: cat /version.json
	I1009 19:27:48.319860  194626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:27:48.319866  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.319919  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.339000  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.339730  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.494727  194626 ssh_runner.go:195] Run: systemctl --version
	I1009 19:27:48.501535  194626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:27:48.537367  194626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:27:48.542466  194626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:27:48.542536  194626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:27:48.568823  194626 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:27:48.568848  194626 start.go:496] detecting cgroup driver to use...
	I1009 19:27:48.568888  194626 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:27:48.568936  194626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:27:48.585765  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:27:48.598310  194626 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:27:48.598367  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:27:48.615925  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:27:48.634773  194626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:27:48.716369  194626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:27:48.805777  194626 docker.go:234] disabling docker service ...
	I1009 19:27:48.805849  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:27:48.826709  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:27:48.840263  194626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:27:48.923854  194626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:27:49.007492  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:27:49.020704  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:27:49.035117  194626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:27:49.035178  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.045691  194626 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:27:49.045759  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.055054  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.064210  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.073237  194626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:27:49.081510  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.090399  194626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.104388  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.113513  194626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:27:49.121175  194626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:27:49.128977  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.201984  194626 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:27:49.310640  194626 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:27:49.310716  194626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:27:49.315028  194626 start.go:564] Will wait 60s for crictl version
	I1009 19:27:49.315098  194626 ssh_runner.go:195] Run: which crictl
	I1009 19:27:49.319134  194626 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:27:49.344131  194626 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:27:49.344210  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.374167  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.406880  194626 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
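(Note: the CRI-O settings rewritten by the sed commands above, such as the pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl, can be spot-checked on the node before kubeadm runs. A minimal sketch, assuming the same drop-in path and socket the log uses:)
	# inspect the drop-in that the sed edits above modified
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# confirm crictl reaches the restarted CRI-O socket configured in /etc/crictl.yaml
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version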
	I1009 19:27:49.408191  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:49.425730  194626 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:27:49.430174  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:49.441269  194626 kubeadm.go:883] updating cluster {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:27:49.441374  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:49.441444  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.474537  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.474563  194626 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:27:49.474626  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.501408  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.501432  194626 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:27:49.501440  194626 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:27:49.501565  194626 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-898615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
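(Note: the kubelet unit fragment above is installed a few lines later as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, and kubeadm later adds its own flags in /var/lib/kubelet/kubeadm-flags.env. Once the start has run, the effective unit can be inspected with systemd's own tooling; a minimal sketch:)
	# show the kubelet unit plus every drop-in, including the 10-kubeadm.conf written above
	sudo systemctl cat kubelet
	# flags kubeadm appends during init (file exists only after kubeadm has run)
	sudo cat /var/lib/kubelet/kubeadm-flags.env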
	I1009 19:27:49.501644  194626 ssh_runner.go:195] Run: crio config
	I1009 19:27:49.548225  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:49.548247  194626 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:27:49.548270  194626 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:27:49.548295  194626 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-898615 NodeName:ha-898615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:27:49.548487  194626 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-898615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
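(Note: the kubeadm configuration printed above is written to /var/tmp/minikube/kubeadm.yaml later in this log. When a control-plane bring-up like this one fails, the generated file can be sanity-checked independently of minikube. A minimal sketch, using the kubeadm binary path from the log and assuming the `config validate` subcommand is present in this kubeadm build:)
	# validate the generated file against the kubeadm API schema
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# print upstream defaults for comparison with the generated InitConfiguration/ClusterConfiguration
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config print init-defaults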
	
	I1009 19:27:49.548516  194626 kube-vip.go:115] generating kube-vip config ...
	I1009 19:27:49.548561  194626 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:27:49.560569  194626 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:27:49.560666  194626 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
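(Note: as the kube-vip.go:163 line above records, control-plane load balancing was skipped because no ip_vs kernel modules were visible inside the node, so the VIP 192.168.49.254 in this manifest is served in ARP mode only. The probe minikube performs can be reproduced, and on a host with the modules available they could be loaded beforehand; a minimal sketch, module names assumed from the stock ip_vs stack and not taken from this log:)
	# minikube's check: are any ip_vs modules already loaded?
	lsmod | grep ip_vs
	# hypothetical remedy on the host before starting the cluster
	sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh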
	I1009 19:27:49.560713  194626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:27:49.568926  194626 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:27:49.569017  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:27:49.577516  194626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:27:49.590951  194626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:27:49.606324  194626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 19:27:49.618986  194626 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 19:27:49.633526  194626 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:27:49.637412  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:49.647415  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.727039  194626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:49.749827  194626 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615 for IP: 192.168.49.2
	I1009 19:27:49.749848  194626 certs.go:195] generating shared ca certs ...
	I1009 19:27:49.749916  194626 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.750063  194626 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:27:49.750105  194626 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:27:49.750114  194626 certs.go:257] generating profile certs ...
	I1009 19:27:49.750166  194626 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key
	I1009 19:27:49.750191  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt with IP's: []
	I1009 19:27:49.922286  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt ...
	I1009 19:27:49.922322  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt: {Name:mke5f0b5d846145d5885091c3fdef11a03b4705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923243  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key ...
	I1009 19:27:49.923266  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key: {Name:mkb5fe559278888df39f6f81eb44f11c5b40eebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923364  194626 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38
	I1009 19:27:49.923404  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 19:27:50.174751  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 ...
	I1009 19:27:50.174785  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38: {Name:mk7158926fc69073b0db0c318bd9373ec8743788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.174962  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 ...
	I1009 19:27:50.174976  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38: {Name:mk3dc2ee28c7f607abddd15bf98fe46423492612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.175056  194626 certs.go:382] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt
	I1009 19:27:50.175135  194626 certs.go:386] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key
	I1009 19:27:50.175189  194626 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key
	I1009 19:27:50.175204  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt with IP's: []
	I1009 19:27:50.225715  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt ...
	I1009 19:27:50.225750  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt: {Name:mk61103353c5c3a0cbf54b18a910b0c83f19ee7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.225929  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key ...
	I1009 19:27:50.225941  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key: {Name:mkbe3072f67f16772d02b0025253c832dee4c437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
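(Note: the apiserver serving certificate generated above is signed for 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2 and the HA VIP 192.168.49.254. When an HA start misbehaves, confirming those SANs on disk is a quick sanity check; a minimal sketch using the profile path from this log:)
	CERT=/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt
	openssl x509 -in "$CERT" -noout -text | grep -A2 'Subject Alternative Name'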
	I1009 19:27:50.226013  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:27:50.226031  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:27:50.226044  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:27:50.226054  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:27:50.226067  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:27:50.226077  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:27:50.226088  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:27:50.226099  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:27:50.226151  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:27:50.226183  194626 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:27:50.226195  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:27:50.226219  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:27:50.226240  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:27:50.226260  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:27:50.226298  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:50.226322  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.226336  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.226348  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.226853  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:27:50.245895  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:27:50.263570  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:27:50.281671  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:27:50.300223  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:27:50.317734  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:27:50.335066  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:27:50.352704  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:27:50.370437  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:27:50.389421  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:27:50.406983  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:27:50.425614  194626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:27:50.439040  194626 ssh_runner.go:195] Run: openssl version
	I1009 19:27:50.445304  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:27:50.454013  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457873  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457929  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.491581  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:27:50.500958  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:27:50.509597  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513479  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513541  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.548203  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:27:50.557261  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:27:50.565908  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570090  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570139  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.604463  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
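(Note: the symlink names used above, b5213941.0, 51391683.0 and 3ec20f2e.0, follow the OpenSSL subject-hash convention so the system trust store can find each CA by hash; the value printed by the `openssl x509 -hash` runs is what determines each link name. A minimal sketch of the same convention, reusing the minikubeCA path from the log:)
	# the output of this command is the <hash> that /etc/ssl/certs/<hash>.0 must point at
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0"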
	I1009 19:27:50.613756  194626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:27:50.617456  194626 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:27:50.617519  194626 kubeadm.go:400] StartCluster: {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:50.617611  194626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:27:50.617699  194626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:27:50.651007  194626 cri.go:89] found id: ""
	I1009 19:27:50.651094  194626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:27:50.660625  194626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:27:50.669536  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:27:50.669585  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:27:50.678565  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:27:50.678583  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:27:50.678632  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:27:50.686673  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:27:50.686739  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:27:50.694481  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:27:50.702511  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:27:50.702571  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:27:50.711335  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.719367  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:27:50.719437  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.727079  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:27:50.735773  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:27:50.735828  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:27:50.743402  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:27:50.783476  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:27:50.783553  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:27:50.804988  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:27:50.805072  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:27:50.805120  194626 kubeadm.go:318] OS: Linux
	I1009 19:27:50.805170  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:27:50.805244  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:27:50.805294  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:27:50.805368  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:27:50.805464  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:27:50.805570  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:27:50.805644  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:27:50.805709  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:27:50.865026  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:27:50.865138  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:27:50.865228  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:27:50.872900  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:27:50.874725  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:27:50.874828  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:27:50.874946  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:27:51.069934  194626 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:27:51.167065  194626 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:27:51.280550  194626 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:27:51.481119  194626 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:27:51.788062  194626 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:27:51.788230  194626 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:51.872181  194626 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:27:51.872362  194626 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:52.084923  194626 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:27:52.283000  194626 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:27:52.633617  194626 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:27:52.633683  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:27:52.795142  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:27:52.946617  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:27:53.060032  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:27:53.125689  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:27:53.322626  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:27:53.323225  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:27:53.325508  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:27:53.328645  194626 out.go:252]   - Booting up control plane ...
	I1009 19:27:53.328749  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:27:53.328835  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:27:53.328914  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:27:53.343297  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:27:53.343474  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:27:53.350427  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:27:53.350584  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:27:53.350658  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:27:53.451355  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:27:53.451572  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:27:54.452237  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000953428s
	I1009 19:27:54.456489  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:27:54.456634  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:27:54.456766  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:27:54.456840  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:31:54.457899  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	I1009 19:31:54.458060  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	I1009 19:31:54.458160  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	I1009 19:31:54.458176  194626 kubeadm.go:318] 
	I1009 19:31:54.458302  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:31:54.458440  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:31:54.458603  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:31:54.458813  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:31:54.458922  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:31:54.459044  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:31:54.459063  194626 kubeadm.go:318] 
	I1009 19:31:54.462630  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:54.462803  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:31:54.463587  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1009 19:31:54.463708  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1009 19:31:54.463871  194626 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000953428s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
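(Note: when kubeadm gives up like this, the failing component is usually still visible to CRI-O even though the livez/healthz endpoints never answered. Following the hint kubeadm prints above, a minimal triage sequence on the node, with CONTAINERID as a placeholder:)
	# list control-plane containers, including ones that already exited
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the logs of whichever container keeps exiting
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# kubelet's own log often shows why the static pods never became healthy
	sudo journalctl -u kubelet --no-pager | tail -n 50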
	
	I1009 19:31:54.463985  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 19:31:57.247677  194626 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.783657743s)
	I1009 19:31:57.247781  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:31:57.261935  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:31:57.261998  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:31:57.270455  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:31:57.270478  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:31:57.270524  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:31:57.278767  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:31:57.278835  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:31:57.286752  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:31:57.295053  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:31:57.295113  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:31:57.303048  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.311557  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:31:57.311631  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.319853  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:31:57.328199  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:31:57.328265  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:31:57.336006  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:31:57.396540  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:57.457937  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:36:00.178531  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 19:36:00.178718  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 19:36:00.182333  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:36:00.182408  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:36:00.182502  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:36:00.182552  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:36:00.182582  194626 kubeadm.go:318] OS: Linux
	I1009 19:36:00.182645  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:36:00.182724  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:36:00.182769  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:36:00.182812  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:36:00.182853  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:36:00.182902  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:36:00.182967  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:36:00.183022  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:36:00.183086  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:36:00.183241  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:36:00.183374  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:36:00.183474  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:36:00.185572  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:36:00.185640  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:36:00.185695  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:36:00.185782  194626 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:36:00.185857  194626 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:36:00.185922  194626 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:36:00.185977  194626 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:36:00.186037  194626 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:36:00.186100  194626 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:36:00.186172  194626 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:36:00.186259  194626 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:36:00.186320  194626 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:36:00.186413  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:36:00.186457  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:36:00.186503  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:36:00.186567  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:36:00.186661  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:36:00.186746  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:36:00.186865  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:36:00.186965  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:36:00.188328  194626 out.go:252]   - Booting up control plane ...
	I1009 19:36:00.188439  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:36:00.188511  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:36:00.188565  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:36:00.188647  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:36:00.188734  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:36:00.188835  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:36:00.188946  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:36:00.188994  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:36:00.189107  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:36:00.189207  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:36:00.189257  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.196157ms
	I1009 19:36:00.189351  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:36:00.189534  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:36:00.189618  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:36:00.189686  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:36:00.189753  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	I1009 19:36:00.189820  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	I1009 19:36:00.189897  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	I1009 19:36:00.189910  194626 kubeadm.go:318] 
	I1009 19:36:00.190035  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:36:00.190115  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:36:00.190192  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:36:00.190268  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:36:00.190333  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:36:00.190440  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:36:00.190476  194626 kubeadm.go:318] 
	I1009 19:36:00.190536  194626 kubeadm.go:402] duration metric: took 8m9.573023353s to StartCluster
	I1009 19:36:00.190620  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:36:00.190688  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:36:00.221163  194626 cri.go:89] found id: ""
	I1009 19:36:00.221200  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.221211  194626 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:36:00.221218  194626 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:36:00.221282  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:36:00.248439  194626 cri.go:89] found id: ""
	I1009 19:36:00.248473  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.248486  194626 logs.go:284] No container was found matching "etcd"
	I1009 19:36:00.248498  194626 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:36:00.248564  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:36:00.275751  194626 cri.go:89] found id: ""
	I1009 19:36:00.275781  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.275792  194626 logs.go:284] No container was found matching "coredns"
	I1009 19:36:00.275801  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:36:00.275868  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:36:00.304183  194626 cri.go:89] found id: ""
	I1009 19:36:00.304218  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.304227  194626 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:36:00.304233  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:36:00.304286  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:36:00.333120  194626 cri.go:89] found id: ""
	I1009 19:36:00.333154  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.333164  194626 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:36:00.333171  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:36:00.333221  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:36:00.362504  194626 cri.go:89] found id: ""
	I1009 19:36:00.362527  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.362536  194626 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:36:00.362542  194626 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:36:00.362602  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:36:00.391912  194626 cri.go:89] found id: ""
	I1009 19:36:00.391939  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.391949  194626 logs.go:284] No container was found matching "kindnet"
	I1009 19:36:00.391964  194626 logs.go:123] Gathering logs for kubelet ...
	I1009 19:36:00.391982  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:36:00.460680  194626 logs.go:123] Gathering logs for dmesg ...
	I1009 19:36:00.460722  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:36:00.473600  194626 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:36:00.473634  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:36:00.538515  194626 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:36:00.538542  194626 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:36:00.538556  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:36:00.601750  194626 logs.go:123] Gathering logs for container status ...
	I1009 19:36:00.601793  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 19:36:00.631755  194626 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 19:36:00.631818  194626 out.go:285] * 
	W1009 19:36:00.631894  194626 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.631914  194626 out.go:285] * 
	W1009 19:36:00.633671  194626 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:36:00.637455  194626 out.go:203] 
	W1009 19:36:00.638829  194626 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.638866  194626 out.go:285] * 
	I1009 19:36:00.640615  194626 out.go:203] 

                                                
                                                
** /stderr **
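The kubeadm output above already points at the next diagnostic step: list the control-plane containers with crictl and read the kubelet/CRI-O journals. A minimal triage sketch on the ha-898615 node, using the same commands the log runs or recommends (CONTAINERID is a placeholder for an ID taken from the ps listing):

	out/minikube-linux-amd64 -p ha-898615 ssh
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400

In this run the container listings above came back empty for kube-apiserver, etcd, kube-scheduler and kube-controller-manager, so the kubelet and CRI-O journals are the most likely place to see why the static pods were never created.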
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-898615 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-898615
helpers_test.go:243: (dbg) docker inspect ha-898615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	        "Created": "2025-10-09T19:27:46.185451139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195210,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:27:46.220087387Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hostname",
	        "HostsPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hosts",
	        "LogPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e-json.log",
	        "Name": "/ha-898615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-898615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-898615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	                "LowerDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-898615",
	                "Source": "/var/lib/docker/volumes/ha-898615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-898615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-898615",
	                "name.minikube.sigs.k8s.io": "ha-898615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39eaa0a5ee22c7c1e0e0329f3f944afa2ddef6cd571ebd9f0aa050805b81a54f",
	            "SandboxKey": "/var/run/docker/netns/39eaa0a5ee22",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-898615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:31:dc:da:08:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a28b71fbb3464ef90fae817eea2f214ee0a3d1fe379af7ad6c6cc41b0261919e",
	                    "EndpointID": "7930ac5db0626d63671f2a77753202141442de70603e1b4acf6213e0bd34944d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-898615",
	                        "24eb0e5b6c52"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615: exit status 6 (322.932773ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 19:36:01.019070  199766 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                       ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-158523 image save --daemon kicbase/echo-server:functional-158523 --alsologtostderr                     │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh            │ functional-158523 ssh findmnt -T /mount-9p | grep 9p                                                              │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh            │ functional-158523 ssh -- ls -la /mount-9p                                                                         │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh            │ functional-158523 ssh sudo umount -f /mount-9p                                                                    │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ mount          │ -p functional-158523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup157564093/001:/mount1 --alsologtostderr -v=1 │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ ssh            │ functional-158523 ssh findmnt -T /mount1                                                                          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ mount          │ -p functional-158523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup157564093/001:/mount2 --alsologtostderr -v=1 │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ mount          │ -p functional-158523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup157564093/001:/mount3 --alsologtostderr -v=1 │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ license        │                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ update-context │ functional-158523 update-context --alsologtostderr -v=2                                                           │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh            │ functional-158523 ssh findmnt -T /mount1                                                                          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ update-context │ functional-158523 update-context --alsologtostderr -v=2                                                           │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ update-context │ functional-158523 update-context --alsologtostderr -v=2                                                           │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh            │ functional-158523 ssh findmnt -T /mount2                                                                          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh            │ functional-158523 ssh findmnt -T /mount3                                                                          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ mount          │ -p functional-158523 --kill=true                                                                                  │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ image          │ functional-158523 image ls --format short --alsologtostderr                                                       │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image          │ functional-158523 image ls --format yaml --alsologtostderr                                                        │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh            │ functional-158523 ssh pgrep buildkitd                                                                             │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ image          │ functional-158523 image ls --format json --alsologtostderr                                                        │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image          │ functional-158523 image build -t localhost/my-image:functional-158523 testdata/build --alsologtostderr            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image          │ functional-158523 image ls --format table --alsologtostderr                                                       │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image          │ functional-158523 image ls                                                                                        │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ delete         │ -p functional-158523                                                                                              │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │ 09 Oct 25 19:27 UTC │
	│ start          │ ha-898615 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio   │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:27:40
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:27:40.840216  194626 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:27:40.840464  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840473  194626 out.go:374] Setting ErrFile to fd 2...
	I1009 19:27:40.840478  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840669  194626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:27:40.841171  194626 out.go:368] Setting JSON to false
	I1009 19:27:40.842061  194626 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4210,"bootTime":1760033851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:27:40.842167  194626 start.go:143] virtualization: kvm guest
	I1009 19:27:40.844410  194626 out.go:179] * [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:27:40.845759  194626 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:27:40.845801  194626 notify.go:221] Checking for updates...
	I1009 19:27:40.848757  194626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:27:40.850175  194626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:27:40.851491  194626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:27:40.852705  194626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:27:40.853892  194626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:27:40.855321  194626 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:27:40.878925  194626 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:27:40.879089  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:40.938194  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.927865936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:40.938299  194626 docker.go:319] overlay module found
	I1009 19:27:40.940236  194626 out.go:179] * Using the docker driver based on user configuration
	I1009 19:27:40.941822  194626 start.go:309] selected driver: docker
	I1009 19:27:40.941843  194626 start.go:930] validating driver "docker" against <nil>
	I1009 19:27:40.941856  194626 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:27:40.942433  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:41.005454  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.995221815 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:41.005685  194626 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 19:27:41.005966  194626 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:27:41.007942  194626 out.go:179] * Using Docker driver with root privileges
	I1009 19:27:41.009445  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:41.009493  194626 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 19:27:41.009501  194626 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:27:41.009585  194626 start.go:353] cluster config:
	{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1009 19:27:41.010949  194626 out.go:179] * Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	I1009 19:27:41.012190  194626 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:27:41.013322  194626 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:27:41.014388  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.014435  194626 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:27:41.014445  194626 cache.go:58] Caching tarball of preloaded images
	I1009 19:27:41.014436  194626 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:27:41.014539  194626 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:27:41.014554  194626 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:27:41.014900  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:41.014929  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json: {Name:mk248a27bef73ecdf1bd71857a83cb8ef52e81ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:41.034874  194626 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:27:41.034900  194626 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:27:41.034920  194626 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:27:41.034961  194626 start.go:361] acquireMachinesLock for ha-898615: {Name:mk23a3ebf19c307a491c00f9452d757837bd240e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:27:41.035085  194626 start.go:365] duration metric: took 102.16µs to acquireMachinesLock for "ha-898615"
	I1009 19:27:41.035119  194626 start.go:94] Provisioning new machine with config: &{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:27:41.035201  194626 start.go:126] createHost starting for "" (driver="docker")
	I1009 19:27:41.037427  194626 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 19:27:41.037659  194626 start.go:160] libmachine.API.Create for "ha-898615" (driver="docker")
	I1009 19:27:41.037695  194626 client.go:168] LocalClient.Create starting
	I1009 19:27:41.037764  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem
	I1009 19:27:41.037804  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037821  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.037887  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem
	I1009 19:27:41.037913  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037927  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.038348  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:27:41.055472  194626 cli_runner.go:211] docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:27:41.055548  194626 network_create.go:284] running [docker network inspect ha-898615] to gather additional debugging logs...
	I1009 19:27:41.055569  194626 cli_runner.go:164] Run: docker network inspect ha-898615
	W1009 19:27:41.074024  194626 cli_runner.go:211] docker network inspect ha-898615 returned with exit code 1
	I1009 19:27:41.074056  194626 network_create.go:287] error running [docker network inspect ha-898615]: docker network inspect ha-898615: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-898615 not found
	I1009 19:27:41.074071  194626 network_create.go:289] output of [docker network inspect ha-898615]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-898615 not found
	
	** /stderr **
	I1009 19:27:41.074158  194626 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:41.091849  194626 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a266e0}
	I1009 19:27:41.091899  194626 network_create.go:124] attempt to create docker network ha-898615 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 19:27:41.091955  194626 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-898615 ha-898615
	I1009 19:27:41.153434  194626 network_create.go:108] docker network ha-898615 192.168.49.0/24 created
	I1009 19:27:41.153469  194626 kic.go:121] calculated static IP "192.168.49.2" for the "ha-898615" container
	I1009 19:27:41.153533  194626 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:27:41.170560  194626 cli_runner.go:164] Run: docker volume create ha-898615 --label name.minikube.sigs.k8s.io=ha-898615 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:27:41.190645  194626 oci.go:103] Successfully created a docker volume ha-898615
	I1009 19:27:41.190737  194626 cli_runner.go:164] Run: docker run --rm --name ha-898615-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --entrypoint /usr/bin/test -v ha-898615:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:27:41.600938  194626 oci.go:107] Successfully prepared a docker volume ha-898615
	I1009 19:27:41.600967  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.600993  194626 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:27:41.601055  194626 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 19:27:46.106521  194626 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.505414435s)
	I1009 19:27:46.106562  194626 kic.go:203] duration metric: took 4.50556336s to extract preloaded images to volume ...
	W1009 19:27:46.106658  194626 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 19:27:46.106697  194626 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 19:27:46.106744  194626 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:27:46.168307  194626 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-898615 --name ha-898615 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-898615 --network ha-898615 --ip 192.168.49.2 --volume ha-898615:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:27:46.438758  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Running}}
	I1009 19:27:46.461401  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.481444  194626 cli_runner.go:164] Run: docker exec ha-898615 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:27:46.527865  194626 oci.go:144] the created container "ha-898615" has a running status.
	I1009 19:27:46.527897  194626 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa...
	I1009 19:27:46.644201  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 19:27:46.644249  194626 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:27:46.674019  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.696195  194626 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:27:46.696225  194626 kic_runner.go:114] Args: [docker exec --privileged ha-898615 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:27:46.751886  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.774821  194626 machine.go:93] provisionDockerMachine start ...
	I1009 19:27:46.774929  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.795147  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.795497  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.795517  194626 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:27:46.948463  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:46.948514  194626 ubuntu.go:182] provisioning hostname "ha-898615"
	I1009 19:27:46.948583  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.967551  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.967801  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.967821  194626 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-898615 && echo "ha-898615" | sudo tee /etc/hostname
	I1009 19:27:47.127119  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:47.127192  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.145629  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.145957  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.145990  194626 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-898615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-898615/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-898615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:27:47.294310  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:27:47.294342  194626 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:27:47.294368  194626 ubuntu.go:190] setting up certificates
	I1009 19:27:47.294401  194626 provision.go:84] configureAuth start
	I1009 19:27:47.294454  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:47.312608  194626 provision.go:143] copyHostCerts
	I1009 19:27:47.312651  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312684  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:27:47.312731  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312817  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:27:47.312923  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.312952  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:27:47.312962  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.313014  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:27:47.313086  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313114  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:27:47.313124  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313163  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:27:47.313236  194626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.ha-898615 san=[127.0.0.1 192.168.49.2 ha-898615 localhost minikube]
	I1009 19:27:47.539620  194626 provision.go:177] copyRemoteCerts
	I1009 19:27:47.539697  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:27:47.539740  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.557819  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:47.662665  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:27:47.662781  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:27:47.683372  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:27:47.683457  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:27:47.701669  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:27:47.701736  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:27:47.720164  194626 provision.go:87] duration metric: took 425.744193ms to configureAuth
	I1009 19:27:47.720200  194626 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:27:47.720451  194626 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:47.720564  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.738437  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.738688  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.738714  194626 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:27:48.000107  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:27:48.000134  194626 machine.go:96] duration metric: took 1.225290325s to provisionDockerMachine
	I1009 19:27:48.000148  194626 client.go:171] duration metric: took 6.962444548s to LocalClient.Create
	I1009 19:27:48.000174  194626 start.go:168] duration metric: took 6.962517267s to libmachine.API.Create "ha-898615"
	I1009 19:27:48.000183  194626 start.go:294] postStartSetup for "ha-898615" (driver="docker")
	I1009 19:27:48.000199  194626 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:27:48.000266  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:27:48.000306  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.017815  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.123997  194626 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:27:48.127754  194626 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:27:48.127793  194626 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:27:48.127806  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:27:48.127864  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:27:48.127968  194626 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:27:48.127983  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:27:48.128079  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:27:48.136280  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:48.157989  194626 start.go:297] duration metric: took 157.790225ms for postStartSetup
	I1009 19:27:48.158323  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.176302  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:48.176728  194626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:27:48.176785  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.195218  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.296952  194626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:27:48.301724  194626 start.go:129] duration metric: took 7.26650252s to createHost
	I1009 19:27:48.301754  194626 start.go:84] releasing machines lock for "ha-898615", held for 7.266653034s
	I1009 19:27:48.301837  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.319813  194626 ssh_runner.go:195] Run: cat /version.json
	I1009 19:27:48.319860  194626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:27:48.319866  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.319919  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.339000  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.339730  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.494727  194626 ssh_runner.go:195] Run: systemctl --version
	I1009 19:27:48.501535  194626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:27:48.537367  194626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:27:48.542466  194626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:27:48.542536  194626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:27:48.568823  194626 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:27:48.568848  194626 start.go:496] detecting cgroup driver to use...
	I1009 19:27:48.568888  194626 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:27:48.568936  194626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:27:48.585765  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:27:48.598310  194626 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:27:48.598367  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:27:48.615925  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:27:48.634773  194626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:27:48.716369  194626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:27:48.805777  194626 docker.go:234] disabling docker service ...
	I1009 19:27:48.805849  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:27:48.826709  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:27:48.840263  194626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:27:48.923854  194626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:27:49.007492  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:27:49.020704  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:27:49.035117  194626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:27:49.035178  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.045691  194626 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:27:49.045759  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.055054  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.064210  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.073237  194626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:27:49.081510  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.090399  194626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.104388  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.113513  194626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:27:49.121175  194626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:27:49.128977  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.201984  194626 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:27:49.310640  194626 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:27:49.310716  194626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:27:49.315028  194626 start.go:564] Will wait 60s for crictl version
	I1009 19:27:49.315098  194626 ssh_runner.go:195] Run: which crictl
	I1009 19:27:49.319134  194626 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:27:49.344131  194626 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:27:49.344210  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.374167  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.406880  194626 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:27:49.408191  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:49.425730  194626 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:27:49.430174  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:49.441269  194626 kubeadm.go:883] updating cluster {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:27:49.441374  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:49.441444  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.474537  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.474563  194626 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:27:49.474626  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.501408  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.501432  194626 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:27:49.501440  194626 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:27:49.501565  194626 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-898615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:27:49.501644  194626 ssh_runner.go:195] Run: crio config
	I1009 19:27:49.548225  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:49.548247  194626 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:27:49.548270  194626 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:27:49.548295  194626 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-898615 NodeName:ha-898615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:27:49.548487  194626 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-898615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
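	(The kubeadm configuration rendered above is copied to the node before "kubeadm init" runs; see the scp of kubeadm.yaml.new and the later cp to kubeadm.yaml further down in this log. A minimal sketch of how the file could be inspected on the node, assuming the ha-898615 profile from this log; this invocation is illustrative, not part of the test run:
	# show the generated kubeadm config inside the minikube node (only present after the scp/cp steps)
	minikube ssh -p ha-898615 -- sudo cat /var/tmp/minikube/kubeadm.yaml
	)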
	I1009 19:27:49.548516  194626 kube-vip.go:115] generating kube-vip config ...
	I1009 19:27:49.548561  194626 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:27:49.560569  194626 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:27:49.560666  194626 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
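	(The kube-vip manifest above was generated without IPVS-based control-plane load balancing because the "lsmod | grep ip_vs" check earlier in this log exited non-zero. A minimal sketch of re-running that check and loading the modules on a host where IPVS is built as loadable kernel modules; module availability depends on the kernel build, so this is an assumption:
	lsmod | grep ip_vs || {
	  sudo modprobe ip_vs        # core IPVS module
	  sudo modprobe ip_vs_rr     # round-robin scheduler module
	}
	lsmod | grep ip_vs           # repeat the same check minikube performs
	)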
	I1009 19:27:49.560713  194626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:27:49.568926  194626 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:27:49.569017  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:27:49.577516  194626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:27:49.590951  194626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:27:49.606324  194626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 19:27:49.618986  194626 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 19:27:49.633526  194626 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:27:49.637412  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:49.647415  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.727039  194626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:49.749827  194626 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615 for IP: 192.168.49.2
	I1009 19:27:49.749848  194626 certs.go:195] generating shared ca certs ...
	I1009 19:27:49.749916  194626 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.750063  194626 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:27:49.750105  194626 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:27:49.750114  194626 certs.go:257] generating profile certs ...
	I1009 19:27:49.750166  194626 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key
	I1009 19:27:49.750191  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt with IP's: []
	I1009 19:27:49.922286  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt ...
	I1009 19:27:49.922322  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt: {Name:mke5f0b5d846145d5885091c3fdef11a03b4705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923243  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key ...
	I1009 19:27:49.923266  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key: {Name:mkb5fe559278888df39f6f81eb44f11c5b40eebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923364  194626 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38
	I1009 19:27:49.923404  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 19:27:50.174751  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 ...
	I1009 19:27:50.174785  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38: {Name:mk7158926fc69073b0db0c318bd9373ec8743788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.174962  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 ...
	I1009 19:27:50.174976  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38: {Name:mk3dc2ee28c7f607abddd15bf98fe46423492612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.175056  194626 certs.go:382] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt
	I1009 19:27:50.175135  194626 certs.go:386] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key
	I1009 19:27:50.175189  194626 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key
	I1009 19:27:50.175204  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt with IP's: []
	I1009 19:27:50.225715  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt ...
	I1009 19:27:50.225750  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt: {Name:mk61103353c5c3a0cbf54b18a910b0c83f19ee7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.225929  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key ...
	I1009 19:27:50.225941  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key: {Name:mkbe3072f67f16772d02b0025253c832dee4c437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.226013  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:27:50.226031  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:27:50.226044  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:27:50.226054  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:27:50.226067  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:27:50.226077  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:27:50.226088  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:27:50.226099  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:27:50.226151  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:27:50.226183  194626 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:27:50.226195  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:27:50.226219  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:27:50.226240  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:27:50.226260  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:27:50.226298  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:50.226322  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.226336  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.226348  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.226853  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:27:50.245895  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:27:50.263570  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:27:50.281671  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:27:50.300223  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:27:50.317734  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:27:50.335066  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:27:50.352704  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:27:50.370437  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:27:50.389421  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:27:50.406983  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:27:50.425614  194626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:27:50.439040  194626 ssh_runner.go:195] Run: openssl version
	I1009 19:27:50.445304  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:27:50.454013  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457873  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457929  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.491581  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:27:50.500958  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:27:50.509597  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513479  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513541  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.548203  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:27:50.557261  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:27:50.565908  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570090  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570139  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.604463  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
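	(The three cert installs above follow the OpenSSL hashed-symlink layout for /etc/ssl/certs: each CA is linked under the name <subject-hash>.0. A minimal sketch of the same convention, reusing the minikubeCA.pem file and the b5213941 hash shown in this log:
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 for this CA
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	)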
	I1009 19:27:50.613756  194626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:27:50.617456  194626 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:27:50.617519  194626 kubeadm.go:400] StartCluster: {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:50.617611  194626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:27:50.617699  194626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:27:50.651007  194626 cri.go:89] found id: ""
	I1009 19:27:50.651094  194626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:27:50.660625  194626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:27:50.669536  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:27:50.669585  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:27:50.678565  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:27:50.678583  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:27:50.678632  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:27:50.686673  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:27:50.686739  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:27:50.694481  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:27:50.702511  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:27:50.702571  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:27:50.711335  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.719367  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:27:50.719437  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.727079  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:27:50.735773  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:27:50.735828  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:27:50.743402  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:27:50.783476  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:27:50.783553  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:27:50.804988  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:27:50.805072  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:27:50.805120  194626 kubeadm.go:318] OS: Linux
	I1009 19:27:50.805170  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:27:50.805244  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:27:50.805294  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:27:50.805368  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:27:50.805464  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:27:50.805570  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:27:50.805644  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:27:50.805709  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:27:50.865026  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:27:50.865138  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:27:50.865228  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:27:50.872900  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:27:50.874725  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:27:50.874828  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:27:50.874946  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:27:51.069934  194626 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:27:51.167065  194626 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:27:51.280550  194626 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:27:51.481119  194626 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:27:51.788062  194626 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:27:51.788230  194626 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:51.872181  194626 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:27:51.872362  194626 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:52.084923  194626 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:27:52.283000  194626 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:27:52.633617  194626 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:27:52.633683  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:27:52.795142  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:27:52.946617  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:27:53.060032  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:27:53.125689  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:27:53.322626  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:27:53.323225  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:27:53.325508  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:27:53.328645  194626 out.go:252]   - Booting up control plane ...
	I1009 19:27:53.328749  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:27:53.328835  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:27:53.328914  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:27:53.343297  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:27:53.343474  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:27:53.350427  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:27:53.350584  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:27:53.350658  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:27:53.451355  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:27:53.451572  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:27:54.452237  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000953428s
	I1009 19:27:54.456489  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:27:54.456634  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:27:54.456766  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:27:54.456840  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:31:54.457899  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	I1009 19:31:54.458060  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	I1009 19:31:54.458160  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	I1009 19:31:54.458176  194626 kubeadm.go:318] 
	I1009 19:31:54.458302  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:31:54.458440  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:31:54.458603  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:31:54.458813  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:31:54.458922  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:31:54.459044  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:31:54.459063  194626 kubeadm.go:318] 
	I1009 19:31:54.462630  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:54.462803  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:31:54.463587  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1009 19:31:54.463708  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1009 19:31:54.463871  194626 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000953428s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
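	(The kubeadm failure text above suggests listing the kube containers with crictl and inspecting the logs of the failing one. A minimal sketch that follows that suggestion, using the same crio socket path used throughout this log; the kube-apiserver filter is just one example of the control-plane names to check:
	sock=unix:///var/run/crio/crio.sock
	sudo crictl --runtime-endpoint "$sock" ps -a | grep kube | grep -v pause   # list all kube containers, running or exited
	for id in $(sudo crictl --runtime-endpoint "$sock" ps -a --quiet --name kube-apiserver); do
	  sudo crictl --runtime-endpoint "$sock" logs "$id"                        # dump logs of each matching container
	done
	)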
	I1009 19:31:54.463985  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 19:31:57.247677  194626 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.783657743s)
	I1009 19:31:57.247781  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:31:57.261935  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:31:57.261998  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:31:57.270455  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:31:57.270478  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:31:57.270524  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:31:57.278767  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:31:57.278835  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:31:57.286752  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:31:57.295053  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:31:57.295113  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:31:57.303048  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.311557  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:31:57.311631  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.319853  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:31:57.328199  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:31:57.328265  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:31:57.336006  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:31:57.396540  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:57.457937  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:36:00.178531  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 19:36:00.178718  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 19:36:00.182333  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:36:00.182408  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:36:00.182502  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:36:00.182552  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:36:00.182582  194626 kubeadm.go:318] OS: Linux
	I1009 19:36:00.182645  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:36:00.182724  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:36:00.182769  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:36:00.182812  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:36:00.182853  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:36:00.182902  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:36:00.182967  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:36:00.183022  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:36:00.183086  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:36:00.183241  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:36:00.183374  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:36:00.183474  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:36:00.185572  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:36:00.185640  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:36:00.185695  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:36:00.185782  194626 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:36:00.185857  194626 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:36:00.185922  194626 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:36:00.185977  194626 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:36:00.186037  194626 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:36:00.186100  194626 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:36:00.186172  194626 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:36:00.186259  194626 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:36:00.186320  194626 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:36:00.186413  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:36:00.186457  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:36:00.186503  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:36:00.186567  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:36:00.186661  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:36:00.186746  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:36:00.186865  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:36:00.186965  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:36:00.188328  194626 out.go:252]   - Booting up control plane ...
	I1009 19:36:00.188439  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:36:00.188511  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:36:00.188565  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:36:00.188647  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:36:00.188734  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:36:00.188835  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:36:00.188946  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:36:00.188994  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:36:00.189107  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:36:00.189207  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:36:00.189257  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.196157ms
	I1009 19:36:00.189351  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:36:00.189534  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:36:00.189618  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:36:00.189686  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:36:00.189753  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	I1009 19:36:00.189820  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	I1009 19:36:00.189897  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	I1009 19:36:00.189910  194626 kubeadm.go:318] 
	I1009 19:36:00.190035  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:36:00.190115  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:36:00.190192  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:36:00.190268  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:36:00.190333  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:36:00.190440  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:36:00.190476  194626 kubeadm.go:318] 
	I1009 19:36:00.190536  194626 kubeadm.go:402] duration metric: took 8m9.573023353s to StartCluster
	I1009 19:36:00.190620  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:36:00.190688  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:36:00.221163  194626 cri.go:89] found id: ""
	I1009 19:36:00.221200  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.221211  194626 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:36:00.221218  194626 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:36:00.221282  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:36:00.248439  194626 cri.go:89] found id: ""
	I1009 19:36:00.248473  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.248486  194626 logs.go:284] No container was found matching "etcd"
	I1009 19:36:00.248498  194626 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:36:00.248564  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:36:00.275751  194626 cri.go:89] found id: ""
	I1009 19:36:00.275781  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.275792  194626 logs.go:284] No container was found matching "coredns"
	I1009 19:36:00.275801  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:36:00.275868  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:36:00.304183  194626 cri.go:89] found id: ""
	I1009 19:36:00.304218  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.304227  194626 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:36:00.304233  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:36:00.304286  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:36:00.333120  194626 cri.go:89] found id: ""
	I1009 19:36:00.333154  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.333164  194626 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:36:00.333171  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:36:00.333221  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:36:00.362504  194626 cri.go:89] found id: ""
	I1009 19:36:00.362527  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.362536  194626 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:36:00.362542  194626 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:36:00.362602  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:36:00.391912  194626 cri.go:89] found id: ""
	I1009 19:36:00.391939  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.391949  194626 logs.go:284] No container was found matching "kindnet"
	I1009 19:36:00.391964  194626 logs.go:123] Gathering logs for kubelet ...
	I1009 19:36:00.391982  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:36:00.460680  194626 logs.go:123] Gathering logs for dmesg ...
	I1009 19:36:00.460722  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:36:00.473600  194626 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:36:00.473634  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:36:00.538515  194626 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:36:00.538542  194626 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:36:00.538556  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:36:00.601750  194626 logs.go:123] Gathering logs for container status ...
	I1009 19:36:00.601793  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 19:36:00.631755  194626 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 19:36:00.631818  194626 out.go:285] * 
	W1009 19:36:00.631894  194626 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.631914  194626 out.go:285] * 
	W1009 19:36:00.633671  194626 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:36:00.637455  194626 out.go:203] 
	W1009 19:36:00.638829  194626 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.638866  194626 out.go:285] * 
	I1009 19:36:00.640615  194626 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:35:54 ha-898615 crio[777]: time="2025-10-09T19:35:54.008516701Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:35:54 ha-898615 crio[777]: time="2025-10-09T19:35:54.009151454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:35:54 ha-898615 crio[777]: time="2025-10-09T19:35:54.010511601Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:35:54 ha-898615 crio[777]: time="2025-10-09T19:35:54.011004015Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:35:54 ha-898615 crio[777]: time="2025-10-09T19:35:54.029912815Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a018e560-21b5-45e4-a82d-0307ed735082 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:35:54 ha-898615 crio[777]: time="2025-10-09T19:35:54.030770804Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=96b17c13-bba4-4798-9e05-011c08ecc776 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:35:54 ha-898615 crio[777]: time="2025-10-09T19:35:54.031503458Z" level=info msg="createCtr: deleting container ID 9839af75c43bc403442f3312e03fc8d28fc014b2a92a0d61c5a85c69cb0c5033 from idIndex" id=a018e560-21b5-45e4-a82d-0307ed735082 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:35:54 ha-898615 crio[777]: time="2025-10-09T19:35:54.031547629Z" level=info msg="createCtr: removing container 9839af75c43bc403442f3312e03fc8d28fc014b2a92a0d61c5a85c69cb0c5033" id=a018e560-21b5-45e4-a82d-0307ed735082 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:35:54 ha-898615 crio[777]: time="2025-10-09T19:35:54.031594433Z" level=info msg="createCtr: deleting container 9839af75c43bc403442f3312e03fc8d28fc014b2a92a0d61c5a85c69cb0c5033 from storage" id=a018e560-21b5-45e4-a82d-0307ed735082 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:35:54 ha-898615 crio[777]: time="2025-10-09T19:35:54.032238516Z" level=info msg="createCtr: deleting container ID 227e551689bd95c928991efb37dfddadfcdc5ec90634903db49914444e5b9557 from idIndex" id=96b17c13-bba4-4798-9e05-011c08ecc776 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:35:54 ha-898615 crio[777]: time="2025-10-09T19:35:54.032274896Z" level=info msg="createCtr: removing container 227e551689bd95c928991efb37dfddadfcdc5ec90634903db49914444e5b9557" id=96b17c13-bba4-4798-9e05-011c08ecc776 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:35:54 ha-898615 crio[777]: time="2025-10-09T19:35:54.032305727Z" level=info msg="createCtr: deleting container 227e551689bd95c928991efb37dfddadfcdc5ec90634903db49914444e5b9557 from storage" id=96b17c13-bba4-4798-9e05-011c08ecc776 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:35:54 ha-898615 crio[777]: time="2025-10-09T19:35:54.035203974Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-898615_kube-system_2d43b86ac03e93e11f35c923e2560103_0" id=a018e560-21b5-45e4-a82d-0307ed735082 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:35:54 ha-898615 crio[777]: time="2025-10-09T19:35:54.035618816Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-898615_kube-system_aad558bc9f2efde8d3c90d373798de18_0" id=96b17c13-bba4-4798-9e05-011c08ecc776 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:35:56 ha-898615 crio[777]: time="2025-10-09T19:35:56.001269558Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=5af536bc-36cb-49a8-be48-4b9073721371 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:35:56 ha-898615 crio[777]: time="2025-10-09T19:35:56.00232534Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=52fc1f03-b3ed-4d71-b4f6-1eeaa373d2d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:35:56 ha-898615 crio[777]: time="2025-10-09T19:35:56.003489512Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-898615/kube-controller-manager" id=8322104f-c28a-4267-be31-f6006dfc515a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:35:56 ha-898615 crio[777]: time="2025-10-09T19:35:56.003831144Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:35:56 ha-898615 crio[777]: time="2025-10-09T19:35:56.007424652Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:35:56 ha-898615 crio[777]: time="2025-10-09T19:35:56.007878552Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:35:56 ha-898615 crio[777]: time="2025-10-09T19:35:56.024082861Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8322104f-c28a-4267-be31-f6006dfc515a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:35:56 ha-898615 crio[777]: time="2025-10-09T19:35:56.02554206Z" level=info msg="createCtr: deleting container ID e7337dcd5100278117f3545affbb54e2ae7421d21cfd7a5991f434484138a44b from idIndex" id=8322104f-c28a-4267-be31-f6006dfc515a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:35:56 ha-898615 crio[777]: time="2025-10-09T19:35:56.025588989Z" level=info msg="createCtr: removing container e7337dcd5100278117f3545affbb54e2ae7421d21cfd7a5991f434484138a44b" id=8322104f-c28a-4267-be31-f6006dfc515a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:35:56 ha-898615 crio[777]: time="2025-10-09T19:35:56.025667607Z" level=info msg="createCtr: deleting container e7337dcd5100278117f3545affbb54e2ae7421d21cfd7a5991f434484138a44b from storage" id=8322104f-c28a-4267-be31-f6006dfc515a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:35:56 ha-898615 crio[777]: time="2025-10-09T19:35:56.028013267Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-898615_kube-system_606a9e8949f01295d66148e5eac379ce_0" id=8322104f-c28a-4267-be31-f6006dfc515a name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:36:01.640503    2696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:01.641131    2696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:01.642771    2696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:01.643287    2696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:01.644280    2696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:36:01 up  1:18,  0 user,  load average: 0.01, 0.05, 1.90
	Linux ha-898615 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:35:54 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:35:54 ha-898615 kubelet[1937]:  > podSandboxID="043d27dc8aae857d8a42667dfed1b409b78957e0ac42335f6d23fbc5540aedfd"
	Oct 09 19:35:54 ha-898615 kubelet[1937]: E1009 19:35:54.035790    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:35:54 ha-898615 kubelet[1937]:         container etcd start failed in pod etcd-ha-898615_kube-system(2d43b86ac03e93e11f35c923e2560103): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:35:54 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:35:54 ha-898615 kubelet[1937]: E1009 19:35:54.035836    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-898615" podUID="2d43b86ac03e93e11f35c923e2560103"
	Oct 09 19:35:54 ha-898615 kubelet[1937]: E1009 19:35:54.035933    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:35:54 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:35:54 ha-898615 kubelet[1937]:  > podSandboxID="d0dc08f9b3192b73d8d0a9663be3c0a1eb6c98f898ae3f73ccefd6abc80c6b8e"
	Oct 09 19:35:54 ha-898615 kubelet[1937]: E1009 19:35:54.036052    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:35:54 ha-898615 kubelet[1937]:         container kube-apiserver start failed in pod kube-apiserver-ha-898615_kube-system(aad558bc9f2efde8d3c90d373798de18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:35:54 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:35:54 ha-898615 kubelet[1937]: E1009 19:35:54.037241    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-898615" podUID="aad558bc9f2efde8d3c90d373798de18"
	Oct 09 19:35:56 ha-898615 kubelet[1937]: E1009 19:35:56.000585    1937 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:35:56 ha-898615 kubelet[1937]: E1009 19:35:56.028448    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:35:56 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:35:56 ha-898615 kubelet[1937]:  > podSandboxID="da9ccac0760165455089feb0a833fa92808edbd25f27148d0daf896a8d20b03b"
	Oct 09 19:35:56 ha-898615 kubelet[1937]: E1009 19:35:56.028608    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:35:56 ha-898615 kubelet[1937]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-898615_kube-system(606a9e8949f01295d66148e5eac379ce): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:35:56 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:35:56 ha-898615 kubelet[1937]: E1009 19:35:56.028655    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-898615" podUID="606a9e8949f01295d66148e5eac379ce"
	Oct 09 19:35:56 ha-898615 kubelet[1937]: E1009 19:35:56.627317    1937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:35:56 ha-898615 kubelet[1937]: I1009 19:35:56.784764    1937 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:35:56 ha-898615 kubelet[1937]: E1009 19:35:56.785226    1937 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	Oct 09 19:36:00 ha-898615 kubelet[1937]: E1009 19:36:00.015441    1937 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-898615\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615: exit status 6 (311.520173ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:36:02.042289  200100 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-898615" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (501.26s)
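The kubelet and CRI-O sections above show the control-plane containers repeatedly failing with "container create failed: cannot open sd-bus: No such file or directory", which is what the kubeadm wait-control-plane phase then times out on. That error usually means the OCI runtime could not reach systemd's D-Bus socket inside the node container (for example when a systemd cgroup manager is configured but systemd/dbus is not actually serving the socket). A minimal sketch of how one might check this from the host, assuming the node container is named ha-898615 as elsewhere in this report; the paths and unit names below are assumptions, not taken from this log:

	# is systemd really PID 1 inside the kicbase container, and is its D-Bus socket present?
	docker exec ha-898615 ps -p 1 -o comm=
	docker exec ha-898615 ls -l /run/dbus/system_bus_socket /run/systemd/private
	docker exec ha-898615 systemctl is-active dbus
	# list whatever kube containers CRI-O managed to create, as kubeadm's advice above suggests
	docker exec ha-898615 crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
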

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (91.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (99.238893ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-898615" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 kubectl -- rollout status deployment/busybox: exit status 1 (99.408902ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-898615"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (95.800577ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-898615"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 19:36:02.353563  141519 retry.go:31] will retry after 641.96903ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.621475ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-898615"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 19:36:03.094315  141519 retry.go:31] will retry after 1.105266974s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.753164ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-898615"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 19:36:04.299547  141519 retry.go:31] will retry after 3.242842486s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.211675ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-898615"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 19:36:07.643109  141519 retry.go:31] will retry after 4.82267898s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.294235ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-898615"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 19:36:12.568576  141519 retry.go:31] will retry after 5.436980716s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.65897ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-898615"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 19:36:18.104963  141519 retry.go:31] will retry after 4.628561494s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.803317ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-898615"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 19:36:22.836833  141519 retry.go:31] will retry after 13.260997841s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.501839ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-898615"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 19:36:36.203863  141519 retry.go:31] will retry after 13.379661367s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.604486ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-898615"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 19:36:49.684801  141519 retry.go:31] will retry after 16.730790709s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.163454ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-898615"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 19:37:06.523847  141519 retry.go:31] will retry after 25.55009987s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (95.758849ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-898615"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (94.675511ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-898615"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 kubectl -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 kubectl -- exec  -- nslookup kubernetes.io: exit status 1 (97.102641ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-898615"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 kubectl -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 kubectl -- exec  -- nslookup kubernetes.default: exit status 1 (97.033189ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-898615"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (98.234027ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-898615"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
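Every kubectl invocation in this test fails with the same `error: no server found for cluster "ha-898615"`, which is consistent with the earlier status error reporting that "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig: the failed StartCluster never wrote the cluster/context entries, so there is nothing for kubectl to connect to. A small sketch of how one could confirm that directly against the kubeconfig path reported above (the jsonpath expression is illustrative, not taken from the test):

	# show which contexts and clusters the test's kubeconfig actually contains
	KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig kubectl config get-contexts
	KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig kubectl config view -o jsonpath='{range .clusters[*]}{.name}{"\n"}{end}'
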
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-898615
helpers_test.go:243: (dbg) docker inspect ha-898615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	        "Created": "2025-10-09T19:27:46.185451139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195210,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:27:46.220087387Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hostname",
	        "HostsPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hosts",
	        "LogPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e-json.log",
	        "Name": "/ha-898615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-898615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-898615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	                "LowerDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-898615",
	                "Source": "/var/lib/docker/volumes/ha-898615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-898615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-898615",
	                "name.minikube.sigs.k8s.io": "ha-898615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39eaa0a5ee22c7c1e0e0329f3f944afa2ddef6cd571ebd9f0aa050805b81a54f",
	            "SandboxKey": "/var/run/docker/netns/39eaa0a5ee22",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-898615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:31:dc:da:08:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a28b71fbb3464ef90fae817eea2f214ee0a3d1fe379af7ad6c6cc41b0261919e",
	                    "EndpointID": "7930ac5db0626d63671f2a77753202141442de70603e1b4acf6213e0bd34944d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-898615",
	                        "24eb0e5b6c52"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
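The block above is the full docker container inspect record for the ha-898615 node container. For reference, single fields can be read back with the same kind of Go-template queries the harness itself runs later in this log (shell quoting simplified here, so treat these as illustrative rather than the exact invocations):

    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-898615
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-898615

The first prints the container's address on the ha-898615 network (192.168.49.2 in the dump above); the second prints the host port published for 22/tcp (32783 above).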
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615: exit status 6 (307.219814ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 19:37:32.871032  201057 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
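The non-zero status exit lines up with the warning in the stdout above: the "ha-898615" entry is missing from the kubeconfig at /home/jenkins/minikube-integration/21683-137890/kubeconfig, so kubectl is still pointed at a stale context. The fix the warning itself suggests would look roughly like the following (a sketch only; the -p profile flag matches how the rest of this run invokes minikube):

    out/minikube-linux-amd64 -p ha-898615 update-context
    kubectl config current-context

update-context rewrites the kubeconfig entry for the profile, and kubectl config current-context then shows which cluster kubectl is targeting.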
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-158523 image ls --format yaml --alsologtostderr                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ ssh     │ functional-158523 ssh pgrep buildkitd                                                                           │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ image   │ functional-158523 image ls --format json --alsologtostderr                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image build -t localhost/my-image:functional-158523 testdata/build --alsologtostderr          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image ls --format table --alsologtostderr                                                     │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image ls                                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ delete  │ -p functional-158523                                                                                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │ 09 Oct 25 19:27 UTC │
	│ start   │ ha-898615 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- rollout status deployment/busybox                                                          │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:27:40
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:27:40.840216  194626 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:27:40.840464  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840473  194626 out.go:374] Setting ErrFile to fd 2...
	I1009 19:27:40.840478  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840669  194626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:27:40.841171  194626 out.go:368] Setting JSON to false
	I1009 19:27:40.842061  194626 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4210,"bootTime":1760033851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:27:40.842167  194626 start.go:143] virtualization: kvm guest
	I1009 19:27:40.844410  194626 out.go:179] * [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:27:40.845759  194626 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:27:40.845801  194626 notify.go:221] Checking for updates...
	I1009 19:27:40.848757  194626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:27:40.850175  194626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:27:40.851491  194626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:27:40.852705  194626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:27:40.853892  194626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:27:40.855321  194626 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:27:40.878925  194626 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:27:40.879089  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:40.938194  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.927865936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:40.938299  194626 docker.go:319] overlay module found
	I1009 19:27:40.940236  194626 out.go:179] * Using the docker driver based on user configuration
	I1009 19:27:40.941822  194626 start.go:309] selected driver: docker
	I1009 19:27:40.941843  194626 start.go:930] validating driver "docker" against <nil>
	I1009 19:27:40.941856  194626 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:27:40.942433  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:41.005454  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.995221815 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:41.005685  194626 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 19:27:41.005966  194626 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:27:41.007942  194626 out.go:179] * Using Docker driver with root privileges
	I1009 19:27:41.009445  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:41.009493  194626 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 19:27:41.009501  194626 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:27:41.009585  194626 start.go:353] cluster config:
	{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1009 19:27:41.010949  194626 out.go:179] * Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	I1009 19:27:41.012190  194626 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:27:41.013322  194626 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:27:41.014388  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.014435  194626 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:27:41.014445  194626 cache.go:58] Caching tarball of preloaded images
	I1009 19:27:41.014436  194626 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:27:41.014539  194626 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:27:41.014554  194626 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:27:41.014900  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:41.014929  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json: {Name:mk248a27bef73ecdf1bd71857a83cb8ef52e81ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:41.034874  194626 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:27:41.034900  194626 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:27:41.034920  194626 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:27:41.034961  194626 start.go:361] acquireMachinesLock for ha-898615: {Name:mk23a3ebf19c307a491c00f9452d757837bd240e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:27:41.035085  194626 start.go:365] duration metric: took 102.16µs to acquireMachinesLock for "ha-898615"
	I1009 19:27:41.035119  194626 start.go:94] Provisioning new machine with config: &{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:27:41.035201  194626 start.go:126] createHost starting for "" (driver="docker")
	I1009 19:27:41.037427  194626 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 19:27:41.037659  194626 start.go:160] libmachine.API.Create for "ha-898615" (driver="docker")
	I1009 19:27:41.037695  194626 client.go:168] LocalClient.Create starting
	I1009 19:27:41.037764  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem
	I1009 19:27:41.037804  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037821  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.037887  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem
	I1009 19:27:41.037913  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037927  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.038348  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:27:41.055472  194626 cli_runner.go:211] docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:27:41.055548  194626 network_create.go:284] running [docker network inspect ha-898615] to gather additional debugging logs...
	I1009 19:27:41.055569  194626 cli_runner.go:164] Run: docker network inspect ha-898615
	W1009 19:27:41.074024  194626 cli_runner.go:211] docker network inspect ha-898615 returned with exit code 1
	I1009 19:27:41.074056  194626 network_create.go:287] error running [docker network inspect ha-898615]: docker network inspect ha-898615: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-898615 not found
	I1009 19:27:41.074071  194626 network_create.go:289] output of [docker network inspect ha-898615]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-898615 not found
	
	** /stderr **
	I1009 19:27:41.074158  194626 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:41.091849  194626 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a266e0}
	I1009 19:27:41.091899  194626 network_create.go:124] attempt to create docker network ha-898615 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 19:27:41.091955  194626 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-898615 ha-898615
	I1009 19:27:41.153434  194626 network_create.go:108] docker network ha-898615 192.168.49.0/24 created
	I1009 19:27:41.153469  194626 kic.go:121] calculated static IP "192.168.49.2" for the "ha-898615" container
	I1009 19:27:41.153533  194626 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:27:41.170560  194626 cli_runner.go:164] Run: docker volume create ha-898615 --label name.minikube.sigs.k8s.io=ha-898615 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:27:41.190645  194626 oci.go:103] Successfully created a docker volume ha-898615
	I1009 19:27:41.190737  194626 cli_runner.go:164] Run: docker run --rm --name ha-898615-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --entrypoint /usr/bin/test -v ha-898615:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:27:41.600938  194626 oci.go:107] Successfully prepared a docker volume ha-898615
	I1009 19:27:41.600967  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.600993  194626 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:27:41.601055  194626 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 19:27:46.106521  194626 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.505414435s)
	I1009 19:27:46.106562  194626 kic.go:203] duration metric: took 4.50556336s to extract preloaded images to volume ...
	W1009 19:27:46.106658  194626 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 19:27:46.106697  194626 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 19:27:46.106744  194626 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:27:46.168307  194626 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-898615 --name ha-898615 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-898615 --network ha-898615 --ip 192.168.49.2 --volume ha-898615:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:27:46.438758  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Running}}
	I1009 19:27:46.461401  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.481444  194626 cli_runner.go:164] Run: docker exec ha-898615 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:27:46.527865  194626 oci.go:144] the created container "ha-898615" has a running status.
	I1009 19:27:46.527897  194626 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa...
	I1009 19:27:46.644201  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 19:27:46.644249  194626 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:27:46.674019  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.696195  194626 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:27:46.696225  194626 kic_runner.go:114] Args: [docker exec --privileged ha-898615 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:27:46.751886  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.774821  194626 machine.go:93] provisionDockerMachine start ...
	I1009 19:27:46.774929  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.795147  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.795497  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.795517  194626 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:27:46.948463  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:46.948514  194626 ubuntu.go:182] provisioning hostname "ha-898615"
	I1009 19:27:46.948583  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.967551  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.967801  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.967821  194626 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-898615 && echo "ha-898615" | sudo tee /etc/hostname
	I1009 19:27:47.127119  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:47.127192  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.145629  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.145957  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.145990  194626 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-898615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-898615/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-898615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:27:47.294310  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:27:47.294342  194626 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:27:47.294368  194626 ubuntu.go:190] setting up certificates
	I1009 19:27:47.294401  194626 provision.go:84] configureAuth start
	I1009 19:27:47.294454  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:47.312608  194626 provision.go:143] copyHostCerts
	I1009 19:27:47.312651  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312684  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:27:47.312731  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312817  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:27:47.312923  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.312952  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:27:47.312962  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.313014  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:27:47.313086  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313114  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:27:47.313124  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313163  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:27:47.313236  194626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.ha-898615 san=[127.0.0.1 192.168.49.2 ha-898615 localhost minikube]
	I1009 19:27:47.539620  194626 provision.go:177] copyRemoteCerts
	I1009 19:27:47.539697  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:27:47.539740  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.557819  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:47.662665  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:27:47.662781  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:27:47.683372  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:27:47.683457  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:27:47.701669  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:27:47.701736  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:27:47.720164  194626 provision.go:87] duration metric: took 425.744193ms to configureAuth
	I1009 19:27:47.720200  194626 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:27:47.720451  194626 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:47.720564  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.738437  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.738688  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.738714  194626 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:27:48.000107  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:27:48.000134  194626 machine.go:96] duration metric: took 1.225290325s to provisionDockerMachine
	I1009 19:27:48.000148  194626 client.go:171] duration metric: took 6.962444548s to LocalClient.Create
	I1009 19:27:48.000174  194626 start.go:168] duration metric: took 6.962517267s to libmachine.API.Create "ha-898615"
	I1009 19:27:48.000183  194626 start.go:294] postStartSetup for "ha-898615" (driver="docker")
	I1009 19:27:48.000199  194626 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:27:48.000266  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:27:48.000306  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.017815  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.123997  194626 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:27:48.127754  194626 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:27:48.127793  194626 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:27:48.127806  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:27:48.127864  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:27:48.127968  194626 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:27:48.127983  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:27:48.128079  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:27:48.136280  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:48.157989  194626 start.go:297] duration metric: took 157.790225ms for postStartSetup
	I1009 19:27:48.158323  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.176302  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:48.176728  194626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:27:48.176785  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.195218  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.296952  194626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:27:48.301724  194626 start.go:129] duration metric: took 7.26650252s to createHost
	I1009 19:27:48.301754  194626 start.go:84] releasing machines lock for "ha-898615", held for 7.266653034s
	I1009 19:27:48.301837  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.319813  194626 ssh_runner.go:195] Run: cat /version.json
	I1009 19:27:48.319860  194626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:27:48.319866  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.319919  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.339000  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.339730  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.494727  194626 ssh_runner.go:195] Run: systemctl --version
	I1009 19:27:48.501535  194626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:27:48.537367  194626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:27:48.542466  194626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:27:48.542536  194626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:27:48.568823  194626 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:27:48.568848  194626 start.go:496] detecting cgroup driver to use...
	I1009 19:27:48.568888  194626 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:27:48.568936  194626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:27:48.585765  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:27:48.598310  194626 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:27:48.598367  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:27:48.615925  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:27:48.634773  194626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:27:48.716369  194626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:27:48.805777  194626 docker.go:234] disabling docker service ...
	I1009 19:27:48.805849  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:27:48.826709  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:27:48.840263  194626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:27:48.923854  194626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:27:49.007492  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:27:49.020704  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:27:49.035117  194626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:27:49.035178  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.045691  194626 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:27:49.045759  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.055054  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.064210  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.073237  194626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:27:49.081510  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.090399  194626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.104388  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.113513  194626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:27:49.121175  194626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:27:49.128977  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.201984  194626 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:27:49.310640  194626 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:27:49.310716  194626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:27:49.315028  194626 start.go:564] Will wait 60s for crictl version
	I1009 19:27:49.315098  194626 ssh_runner.go:195] Run: which crictl
	I1009 19:27:49.319134  194626 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:27:49.344131  194626 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:27:49.344210  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.374167  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.406880  194626 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:27:49.408191  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:49.425730  194626 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:27:49.430174  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:49.441269  194626 kubeadm.go:883] updating cluster {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:27:49.441374  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:49.441444  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.474537  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.474563  194626 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:27:49.474626  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.501408  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.501432  194626 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:27:49.501440  194626 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:27:49.501565  194626 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-898615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:27:49.501644  194626 ssh_runner.go:195] Run: crio config
	I1009 19:27:49.548225  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:49.548247  194626 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:27:49.548270  194626 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:27:49.548295  194626 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-898615 NodeName:ha-898615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:27:49.548487  194626 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-898615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
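The kubeadm config printed above is what gets staged on the node as /var/tmp/minikube/kubeadm.yaml (see the scp and cp steps a few lines below). If a config problem were suspected, a minimal sketch of re-checking it by hand with the same kubeadm binary minikube placed on the node (the "config validate" subcommand is assumed to be available in this kubeadm release, and reaching the node through the docker driver's container is likewise an assumption):

    # validate the staged kubeadm config with the kubeadm binary minikube installed on the node
    docker exec ha-898615 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml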
	
	I1009 19:27:49.548516  194626 kube-vip.go:115] generating kube-vip config ...
	I1009 19:27:49.548561  194626 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:27:49.560569  194626 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
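The ip_vs probe above is nothing more than "lsmod | grep ip_vs" on the node; because it exits non-zero, minikube skips kube-vip's IPVS control-plane load balancing and still generates the kube-vip config below without it. A minimal sketch of reproducing that check by hand (the container name is taken from this run; the module list passed to modprobe is the usual ipvs set and is an assumption, not something this log shows):

    # check whether the ipvs kernel modules are visible inside the node container
    docker exec ha-898615 sh -c 'lsmod | grep ip_vs'
    # the node shares the host kernel, so missing modules would be loaded on the host
    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh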
	I1009 19:27:49.560666  194626 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
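This static pod manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so once the kubelet launches it the VIP 192.168.49.254 should appear on eth0 inside the node. A quick way to check that from the host in a run like this one (container name from this run; the iproute2 invocation itself is an assumption, not a step this log performs):

    # the kube-vip static pod should have added the HA VIP to eth0
    docker exec ha-898615 ip addr show eth0 | grep 192.168.49.254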
	I1009 19:27:49.560713  194626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:27:49.568926  194626 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:27:49.569017  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:27:49.577516  194626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:27:49.590951  194626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:27:49.606324  194626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 19:27:49.618986  194626 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 19:27:49.633526  194626 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:27:49.637412  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:49.647415  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.727039  194626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:49.749827  194626 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615 for IP: 192.168.49.2
	I1009 19:27:49.749848  194626 certs.go:195] generating shared ca certs ...
	I1009 19:27:49.749916  194626 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.750063  194626 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:27:49.750105  194626 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:27:49.750114  194626 certs.go:257] generating profile certs ...
	I1009 19:27:49.750166  194626 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key
	I1009 19:27:49.750191  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt with IP's: []
	I1009 19:27:49.922286  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt ...
	I1009 19:27:49.922322  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt: {Name:mke5f0b5d846145d5885091c3fdef11a03b4705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923243  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key ...
	I1009 19:27:49.923266  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key: {Name:mkb5fe559278888df39f6f81eb44f11c5b40eebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923364  194626 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38
	I1009 19:27:49.923404  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 19:27:50.174751  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 ...
	I1009 19:27:50.174785  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38: {Name:mk7158926fc69073b0db0c318bd9373ec8743788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.174962  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 ...
	I1009 19:27:50.174976  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38: {Name:mk3dc2ee28c7f607abddd15bf98fe46423492612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.175056  194626 certs.go:382] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt
	I1009 19:27:50.175135  194626 certs.go:386] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key
	I1009 19:27:50.175189  194626 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key
	I1009 19:27:50.175204  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt with IP's: []
	I1009 19:27:50.225715  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt ...
	I1009 19:27:50.225750  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt: {Name:mk61103353c5c3a0cbf54b18a910b0c83f19ee7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.225929  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key ...
	I1009 19:27:50.225941  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key: {Name:mkbe3072f67f16772d02b0025253c832dee4c437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.226013  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:27:50.226031  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:27:50.226044  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:27:50.226054  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:27:50.226067  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:27:50.226077  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:27:50.226088  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:27:50.226099  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:27:50.226151  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:27:50.226183  194626 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:27:50.226195  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:27:50.226219  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:27:50.226240  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:27:50.226260  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:27:50.226298  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:50.226322  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.226336  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.226348  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.226853  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:27:50.245895  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:27:50.263570  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:27:50.281671  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:27:50.300223  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:27:50.317734  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:27:50.335066  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:27:50.352704  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:27:50.370437  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:27:50.389421  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:27:50.406983  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:27:50.425614  194626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:27:50.439040  194626 ssh_runner.go:195] Run: openssl version
	I1009 19:27:50.445304  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:27:50.454013  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457873  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457929  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.491581  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:27:50.500958  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:27:50.509597  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513479  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513541  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.548203  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:27:50.557261  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:27:50.565908  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570090  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570139  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.604463  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:27:50.613756  194626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:27:50.617456  194626 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:27:50.617519  194626 kubeadm.go:400] StartCluster: {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:50.617611  194626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:27:50.617699  194626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:27:50.651007  194626 cri.go:89] found id: ""
	I1009 19:27:50.651094  194626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:27:50.660625  194626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:27:50.669536  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:27:50.669585  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:27:50.678565  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:27:50.678583  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:27:50.678632  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:27:50.686673  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:27:50.686739  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:27:50.694481  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:27:50.702511  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:27:50.702571  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:27:50.711335  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.719367  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:27:50.719437  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.727079  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:27:50.735773  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:27:50.735828  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:27:50.743402  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:27:50.783476  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:27:50.783553  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:27:50.804988  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:27:50.805072  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:27:50.805120  194626 kubeadm.go:318] OS: Linux
	I1009 19:27:50.805170  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:27:50.805244  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:27:50.805294  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:27:50.805368  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:27:50.805464  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:27:50.805570  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:27:50.805644  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:27:50.805709  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:27:50.865026  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:27:50.865138  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:27:50.865228  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:27:50.872900  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:27:50.874725  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:27:50.874828  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:27:50.874946  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:27:51.069934  194626 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:27:51.167065  194626 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:27:51.280550  194626 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:27:51.481119  194626 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:27:51.788062  194626 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:27:51.788230  194626 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:51.872181  194626 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:27:51.872362  194626 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:52.084923  194626 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:27:52.283000  194626 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:27:52.633617  194626 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:27:52.633683  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:27:52.795142  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:27:52.946617  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:27:53.060032  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:27:53.125689  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:27:53.322626  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:27:53.323225  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:27:53.325508  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:27:53.328645  194626 out.go:252]   - Booting up control plane ...
	I1009 19:27:53.328749  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:27:53.328835  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:27:53.328914  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:27:53.343297  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:27:53.343474  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:27:53.350427  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:27:53.350584  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:27:53.350658  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:27:53.451355  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:27:53.451572  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:27:54.452237  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000953428s
	I1009 19:27:54.456489  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:27:54.456634  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:27:54.456766  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:27:54.456840  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:31:54.457899  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	I1009 19:31:54.458060  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	I1009 19:31:54.458160  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	I1009 19:31:54.458176  194626 kubeadm.go:318] 
	I1009 19:31:54.458302  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:31:54.458440  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:31:54.458603  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:31:54.458813  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:31:54.458922  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:31:54.459044  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:31:54.459063  194626 kubeadm.go:318] 
	I1009 19:31:54.462630  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:54.462803  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:31:54.463587  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1009 19:31:54.463708  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1009 19:31:54.463871  194626 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000953428s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
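The troubleshooting hint kubeadm prints comes down to looking at the CRI-O containers and the kubelet journal on the node, which is what minikube does further down when it finds no kube-apiserver, etcd, scheduler or controller-manager containers at all. A short sketch of running those diagnostics by hand (the crictl and journalctl invocations are the ones quoted in this log; reaching the node via docker exec is an assumption of the docker driver setup):

    # list every kube-* container CRI-O knows about, including exited ones
    docker exec ha-898615 crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # inspect the logs of a failing container (CONTAINERID taken from the listing above)
    docker exec ha-898615 crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
    # kubelet journal, as minikube gathers later in this log
    docker exec ha-898615 journalctl -u kubelet -n 400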
	
	I1009 19:31:54.463985  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 19:31:57.247677  194626 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.783657743s)
	I1009 19:31:57.247781  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:31:57.261935  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:31:57.261998  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:31:57.270455  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:31:57.270478  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:31:57.270524  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:31:57.278767  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:31:57.278835  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:31:57.286752  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:31:57.295053  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:31:57.295113  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:31:57.303048  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.311557  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:31:57.311631  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.319853  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:31:57.328199  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:31:57.328265  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:31:57.336006  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:31:57.396540  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:57.457937  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:36:00.178531  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 19:36:00.178718  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 19:36:00.182333  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:36:00.182408  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:36:00.182502  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:36:00.182552  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:36:00.182582  194626 kubeadm.go:318] OS: Linux
	I1009 19:36:00.182645  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:36:00.182724  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:36:00.182769  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:36:00.182812  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:36:00.182853  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:36:00.182902  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:36:00.182967  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:36:00.183022  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:36:00.183086  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:36:00.183241  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:36:00.183374  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:36:00.183474  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:36:00.185572  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:36:00.185640  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:36:00.185695  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:36:00.185782  194626 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:36:00.185857  194626 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:36:00.185922  194626 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:36:00.185977  194626 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:36:00.186037  194626 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:36:00.186100  194626 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:36:00.186172  194626 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:36:00.186259  194626 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:36:00.186320  194626 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:36:00.186413  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:36:00.186457  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:36:00.186503  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:36:00.186567  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:36:00.186661  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:36:00.186746  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:36:00.186865  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:36:00.186965  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:36:00.188328  194626 out.go:252]   - Booting up control plane ...
	I1009 19:36:00.188439  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:36:00.188511  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:36:00.188565  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:36:00.188647  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:36:00.188734  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:36:00.188835  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:36:00.188946  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:36:00.188994  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:36:00.189107  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:36:00.189207  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:36:00.189257  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.196157ms
	I1009 19:36:00.189351  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:36:00.189534  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:36:00.189618  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:36:00.189686  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:36:00.189753  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	I1009 19:36:00.189820  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	I1009 19:36:00.189897  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	I1009 19:36:00.189910  194626 kubeadm.go:318] 
	I1009 19:36:00.190035  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:36:00.190115  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:36:00.190192  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:36:00.190268  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:36:00.190333  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:36:00.190440  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:36:00.190476  194626 kubeadm.go:318] 
	I1009 19:36:00.190536  194626 kubeadm.go:402] duration metric: took 8m9.573023353s to StartCluster
	I1009 19:36:00.190620  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:36:00.190688  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:36:00.221163  194626 cri.go:89] found id: ""
	I1009 19:36:00.221200  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.221211  194626 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:36:00.221218  194626 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:36:00.221282  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:36:00.248439  194626 cri.go:89] found id: ""
	I1009 19:36:00.248473  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.248486  194626 logs.go:284] No container was found matching "etcd"
	I1009 19:36:00.248498  194626 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:36:00.248564  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:36:00.275751  194626 cri.go:89] found id: ""
	I1009 19:36:00.275781  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.275792  194626 logs.go:284] No container was found matching "coredns"
	I1009 19:36:00.275801  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:36:00.275868  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:36:00.304183  194626 cri.go:89] found id: ""
	I1009 19:36:00.304218  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.304227  194626 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:36:00.304233  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:36:00.304286  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:36:00.333120  194626 cri.go:89] found id: ""
	I1009 19:36:00.333154  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.333164  194626 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:36:00.333171  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:36:00.333221  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:36:00.362504  194626 cri.go:89] found id: ""
	I1009 19:36:00.362527  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.362536  194626 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:36:00.362542  194626 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:36:00.362602  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:36:00.391912  194626 cri.go:89] found id: ""
	I1009 19:36:00.391939  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.391949  194626 logs.go:284] No container was found matching "kindnet"
	I1009 19:36:00.391964  194626 logs.go:123] Gathering logs for kubelet ...
	I1009 19:36:00.391982  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:36:00.460680  194626 logs.go:123] Gathering logs for dmesg ...
	I1009 19:36:00.460722  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:36:00.473600  194626 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:36:00.473634  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:36:00.538515  194626 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:36:00.538542  194626 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:36:00.538556  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:36:00.601750  194626 logs.go:123] Gathering logs for container status ...
	I1009 19:36:00.601793  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 19:36:00.631755  194626 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 19:36:00.631818  194626 out.go:285] * 
	W1009 19:36:00.631894  194626 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.631914  194626 out.go:285] * 
	W1009 19:36:00.633671  194626 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:36:00.637455  194626 out.go:203] 
	W1009 19:36:00.638829  194626 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.638866  194626 out.go:285] * 
	I1009 19:36:00.640615  194626 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:37:25 ha-898615 crio[777]: time="2025-10-09T19:37:25.029962448Z" level=info msg="createCtr: deleting container f0463fb05c9a236bd42767e21fccab4aaedc76d8a27f273f0b303fc1be13f493 from storage" id=f7c60711-99ef-4e00-a9dd-67c148495d8d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:25 ha-898615 crio[777]: time="2025-10-09T19:37:25.032912786Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-898615_kube-system_2d43b86ac03e93e11f35c923e2560103_0" id=7d35822a-5efe-4427-b230-14e5799f66d4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:25 ha-898615 crio[777]: time="2025-10-09T19:37:25.033171537Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-898615_kube-system_bc0e16a1814c5389485436acdfc968ed_0" id=f7c60711-99ef-4e00-a9dd-67c148495d8d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.00094666Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=be7e00b5-29c5-4c96-bb3c-32e71b451835 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.002018236Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=1ccd5bbb-e526-4f37-a424-321687933f28 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.003064189Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-898615/kube-controller-manager" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.003332904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.006742494Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.007212425Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.026056442Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.027514114Z" level=info msg="createCtr: deleting container ID 36a51f78e539029fe2de362c3a84544e8cc3cf7dc7b666f7684fc54c2facb606 from idIndex" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.027573974Z" level=info msg="createCtr: removing container 36a51f78e539029fe2de362c3a84544e8cc3cf7dc7b666f7684fc54c2facb606" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.027610427Z" level=info msg="createCtr: deleting container 36a51f78e539029fe2de362c3a84544e8cc3cf7dc7b666f7684fc54c2facb606 from storage" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.029921646Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-898615_kube-system_606a9e8949f01295d66148e5eac379ce_0" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.001545609Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=02d2fba2-28a5-471d-ae43-8b572455a98b name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.003762187Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=2a3b67b3-badd-4ded-96d4-4e4daa736fa3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.004896442Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-898615/kube-apiserver" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.005099326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.008541201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.008958606Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.026617624Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.027992361Z" level=info msg="createCtr: deleting container ID 46b6cad4087c1b4658ce50e3012b46105eb18113385d6e456b9b9934d03e532f from idIndex" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.02802929Z" level=info msg="createCtr: removing container 46b6cad4087c1b4658ce50e3012b46105eb18113385d6e456b9b9934d03e532f" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.028068885Z" level=info msg="createCtr: deleting container 46b6cad4087c1b4658ce50e3012b46105eb18113385d6e456b9b9934d03e532f from storage" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.030415662Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-898615_kube-system_aad558bc9f2efde8d3c90d373798de18_0" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:37:33.483660    3023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:33.484201    3023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:33.485759    3023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:33.486263    3023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:33.487834    3023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:37:33 up  1:20,  0 user,  load average: 0.02, 0.05, 1.72
	Linux ha-898615 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:37:25 ha-898615 kubelet[1937]: E1009 19:37:25.033590    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:37:25 ha-898615 kubelet[1937]:         container kube-scheduler start failed in pod kube-scheduler-ha-898615_kube-system(bc0e16a1814c5389485436acdfc968ed): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:25 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:25 ha-898615 kubelet[1937]: E1009 19:37:25.034797    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-898615" podUID="bc0e16a1814c5389485436acdfc968ed"
	Oct 09 19:37:27 ha-898615 kubelet[1937]: E1009 19:37:27.643057    1937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:37:27 ha-898615 kubelet[1937]: I1009 19:37:27.812540    1937 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:37:27 ha-898615 kubelet[1937]: E1009 19:37:27.812939    1937 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	Oct 09 19:37:28 ha-898615 kubelet[1937]: E1009 19:37:28.000453    1937 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:37:28 ha-898615 kubelet[1937]: E1009 19:37:28.030276    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:37:28 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:28 ha-898615 kubelet[1937]:  > podSandboxID="da9ccac0760165455089feb0a833fa92808edbd25f27148d0daf896a8d20b03b"
	Oct 09 19:37:28 ha-898615 kubelet[1937]: E1009 19:37:28.030421    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:37:28 ha-898615 kubelet[1937]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-898615_kube-system(606a9e8949f01295d66148e5eac379ce): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:28 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:28 ha-898615 kubelet[1937]: E1009 19:37:28.030462    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-898615" podUID="606a9e8949f01295d66148e5eac379ce"
	Oct 09 19:37:29 ha-898615 kubelet[1937]: E1009 19:37:29.717131    1937 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-898615.186ce986e63acb66  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-898615,UID:ha-898615,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-898615 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-898615,},FirstTimestamp:2025-10-09 19:31:59.992523622 +0000 UTC m=+0.320486905,LastTimestamp:2025-10-09 19:31:59.992523622 +0000 UTC m=+0.320486905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-898615,}"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.000922    1937 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.020052    1937 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-898615\" not found"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.030731    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:37:30 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:30 ha-898615 kubelet[1937]:  > podSandboxID="d0dc08f9b3192b73d8d0a9663be3c0a1eb6c98f898ae3f73ccefd6abc80c6b8e"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.030862    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:37:30 ha-898615 kubelet[1937]:         container kube-apiserver start failed in pod kube-apiserver-ha-898615_kube-system(aad558bc9f2efde8d3c90d373798de18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:30 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.030902    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-898615" podUID="aad558bc9f2efde8d3c90d373798de18"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615: exit status 6 (304.573436ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:37:33.872416  201383 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-898615" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (91.83s)
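
The kubeadm output above already names the node-level checks that apply to this failure. As a minimal sketch (assuming shell access to the ha-898615 node and the CRI-O socket path shown in the log; CONTAINERID is a placeholder), the same inspection can be repeated by hand using only commands that appear in this report:

	# List all Kubernetes control-plane containers known to CRI-O (command from the kubeadm advice above)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Once a failing container is identified, read its logs
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# The kubelet and CRI-O journals quoted above were gathered the same way
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400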

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (97.00983ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-898615"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-898615
helpers_test.go:243: (dbg) docker inspect ha-898615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	        "Created": "2025-10-09T19:27:46.185451139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195210,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:27:46.220087387Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hostname",
	        "HostsPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hosts",
	        "LogPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e-json.log",
	        "Name": "/ha-898615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-898615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-898615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	                "LowerDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-898615",
	                "Source": "/var/lib/docker/volumes/ha-898615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-898615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-898615",
	                "name.minikube.sigs.k8s.io": "ha-898615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39eaa0a5ee22c7c1e0e0329f3f944afa2ddef6cd571ebd9f0aa050805b81a54f",
	            "SandboxKey": "/var/run/docker/netns/39eaa0a5ee22",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-898615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:31:dc:da:08:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a28b71fbb3464ef90fae817eea2f214ee0a3d1fe379af7ad6c6cc41b0261919e",
	                    "EndpointID": "7930ac5db0626d63671f2a77753202141442de70603e1b4acf6213e0bd34944d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-898615",
	                        "24eb0e5b6c52"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615: exit status 6 (300.133455ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:37:34.290114  201529 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-158523 ssh pgrep buildkitd                                                                           │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ image   │ functional-158523 image ls --format json --alsologtostderr                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image build -t localhost/my-image:functional-158523 testdata/build --alsologtostderr          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image ls --format table --alsologtostderr                                                     │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image ls                                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ delete  │ -p functional-158523                                                                                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │ 09 Oct 25 19:27 UTC │
	│ start   │ ha-898615 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- rollout status deployment/busybox                                                          │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
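Note: the repeated podIP queries and the exec entries with an empty pod name in the Audit table above (the doubled space in "kubectl -- exec  -- nslookup ...") suggest the busybox test deployment never produced running pods, so the DNS checks had nothing to exec into. An illustrative manual check against this profile, mirroring the commands already recorded in the table rather than adding new test steps, would be:

  out/minikube-linux-amd64 -p ha-898615 kubectl -- rollout status deployment/busybox --timeout=60s
  out/minikube-linux-amd64 -p ha-898615 kubectl -- get pods -o wide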
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:27:40
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:27:40.840216  194626 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:27:40.840464  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840473  194626 out.go:374] Setting ErrFile to fd 2...
	I1009 19:27:40.840478  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840669  194626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:27:40.841171  194626 out.go:368] Setting JSON to false
	I1009 19:27:40.842061  194626 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4210,"bootTime":1760033851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:27:40.842167  194626 start.go:143] virtualization: kvm guest
	I1009 19:27:40.844410  194626 out.go:179] * [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:27:40.845759  194626 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:27:40.845801  194626 notify.go:221] Checking for updates...
	I1009 19:27:40.848757  194626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:27:40.850175  194626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:27:40.851491  194626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:27:40.852705  194626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:27:40.853892  194626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:27:40.855321  194626 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:27:40.878925  194626 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:27:40.879089  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:40.938194  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.927865936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:40.938299  194626 docker.go:319] overlay module found
	I1009 19:27:40.940236  194626 out.go:179] * Using the docker driver based on user configuration
	I1009 19:27:40.941822  194626 start.go:309] selected driver: docker
	I1009 19:27:40.941843  194626 start.go:930] validating driver "docker" against <nil>
	I1009 19:27:40.941856  194626 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:27:40.942433  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:41.005454  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.995221815 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:41.005685  194626 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 19:27:41.005966  194626 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:27:41.007942  194626 out.go:179] * Using Docker driver with root privileges
	I1009 19:27:41.009445  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:41.009493  194626 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 19:27:41.009501  194626 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:27:41.009585  194626 start.go:353] cluster config:
	{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1009 19:27:41.010949  194626 out.go:179] * Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	I1009 19:27:41.012190  194626 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:27:41.013322  194626 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:27:41.014388  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.014435  194626 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:27:41.014445  194626 cache.go:58] Caching tarball of preloaded images
	I1009 19:27:41.014436  194626 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:27:41.014539  194626 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:27:41.014554  194626 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:27:41.014900  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:41.014929  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json: {Name:mk248a27bef73ecdf1bd71857a83cb8ef52e81ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:41.034874  194626 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:27:41.034900  194626 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:27:41.034920  194626 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:27:41.034961  194626 start.go:361] acquireMachinesLock for ha-898615: {Name:mk23a3ebf19c307a491c00f9452d757837bd240e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:27:41.035085  194626 start.go:365] duration metric: took 102.16µs to acquireMachinesLock for "ha-898615"
	I1009 19:27:41.035119  194626 start.go:94] Provisioning new machine with config: &{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:27:41.035201  194626 start.go:126] createHost starting for "" (driver="docker")
	I1009 19:27:41.037427  194626 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 19:27:41.037659  194626 start.go:160] libmachine.API.Create for "ha-898615" (driver="docker")
	I1009 19:27:41.037695  194626 client.go:168] LocalClient.Create starting
	I1009 19:27:41.037764  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem
	I1009 19:27:41.037804  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037821  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.037887  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem
	I1009 19:27:41.037913  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037927  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.038348  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:27:41.055472  194626 cli_runner.go:211] docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:27:41.055548  194626 network_create.go:284] running [docker network inspect ha-898615] to gather additional debugging logs...
	I1009 19:27:41.055569  194626 cli_runner.go:164] Run: docker network inspect ha-898615
	W1009 19:27:41.074024  194626 cli_runner.go:211] docker network inspect ha-898615 returned with exit code 1
	I1009 19:27:41.074056  194626 network_create.go:287] error running [docker network inspect ha-898615]: docker network inspect ha-898615: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-898615 not found
	I1009 19:27:41.074071  194626 network_create.go:289] output of [docker network inspect ha-898615]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-898615 not found
	
	** /stderr **
	I1009 19:27:41.074158  194626 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:41.091849  194626 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a266e0}
	I1009 19:27:41.091899  194626 network_create.go:124] attempt to create docker network ha-898615 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 19:27:41.091955  194626 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-898615 ha-898615
	I1009 19:27:41.153434  194626 network_create.go:108] docker network ha-898615 192.168.49.0/24 created
	I1009 19:27:41.153469  194626 kic.go:121] calculated static IP "192.168.49.2" for the "ha-898615" container
	I1009 19:27:41.153533  194626 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:27:41.170560  194626 cli_runner.go:164] Run: docker volume create ha-898615 --label name.minikube.sigs.k8s.io=ha-898615 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:27:41.190645  194626 oci.go:103] Successfully created a docker volume ha-898615
	I1009 19:27:41.190737  194626 cli_runner.go:164] Run: docker run --rm --name ha-898615-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --entrypoint /usr/bin/test -v ha-898615:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:27:41.600938  194626 oci.go:107] Successfully prepared a docker volume ha-898615
	I1009 19:27:41.600967  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.600993  194626 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:27:41.601055  194626 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 19:27:46.106521  194626 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.505414435s)
	I1009 19:27:46.106562  194626 kic.go:203] duration metric: took 4.50556336s to extract preloaded images to volume ...
	W1009 19:27:46.106658  194626 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 19:27:46.106697  194626 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 19:27:46.106744  194626 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:27:46.168307  194626 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-898615 --name ha-898615 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-898615 --network ha-898615 --ip 192.168.49.2 --volume ha-898615:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:27:46.438758  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Running}}
	I1009 19:27:46.461401  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.481444  194626 cli_runner.go:164] Run: docker exec ha-898615 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:27:46.527865  194626 oci.go:144] the created container "ha-898615" has a running status.
	I1009 19:27:46.527897  194626 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa...
	I1009 19:27:46.644201  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 19:27:46.644249  194626 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:27:46.674019  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.696195  194626 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:27:46.696225  194626 kic_runner.go:114] Args: [docker exec --privileged ha-898615 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:27:46.751886  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.774821  194626 machine.go:93] provisionDockerMachine start ...
	I1009 19:27:46.774929  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.795147  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.795497  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.795517  194626 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:27:46.948463  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:46.948514  194626 ubuntu.go:182] provisioning hostname "ha-898615"
	I1009 19:27:46.948583  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.967551  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.967801  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.967821  194626 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-898615 && echo "ha-898615" | sudo tee /etc/hostname
	I1009 19:27:47.127119  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:47.127192  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.145629  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.145957  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.145990  194626 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-898615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-898615/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-898615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:27:47.294310  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:27:47.294342  194626 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:27:47.294368  194626 ubuntu.go:190] setting up certificates
	I1009 19:27:47.294401  194626 provision.go:84] configureAuth start
	I1009 19:27:47.294454  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:47.312608  194626 provision.go:143] copyHostCerts
	I1009 19:27:47.312651  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312684  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:27:47.312731  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312817  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:27:47.312923  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.312952  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:27:47.312962  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.313014  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:27:47.313086  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313114  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:27:47.313124  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313163  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:27:47.313236  194626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.ha-898615 san=[127.0.0.1 192.168.49.2 ha-898615 localhost minikube]
	I1009 19:27:47.539620  194626 provision.go:177] copyRemoteCerts
	I1009 19:27:47.539697  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:27:47.539740  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.557819  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:47.662665  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:27:47.662781  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:27:47.683372  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:27:47.683457  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:27:47.701669  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:27:47.701736  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:27:47.720164  194626 provision.go:87] duration metric: took 425.744193ms to configureAuth
	I1009 19:27:47.720200  194626 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:27:47.720451  194626 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:47.720564  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.738437  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.738688  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.738714  194626 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:27:48.000107  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:27:48.000134  194626 machine.go:96] duration metric: took 1.225290325s to provisionDockerMachine
	I1009 19:27:48.000148  194626 client.go:171] duration metric: took 6.962444548s to LocalClient.Create
	I1009 19:27:48.000174  194626 start.go:168] duration metric: took 6.962517267s to libmachine.API.Create "ha-898615"
	I1009 19:27:48.000183  194626 start.go:294] postStartSetup for "ha-898615" (driver="docker")
	I1009 19:27:48.000199  194626 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:27:48.000266  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:27:48.000306  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.017815  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.123997  194626 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:27:48.127754  194626 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:27:48.127793  194626 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:27:48.127806  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:27:48.127864  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:27:48.127968  194626 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:27:48.127983  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:27:48.128079  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:27:48.136280  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:48.157989  194626 start.go:297] duration metric: took 157.790225ms for postStartSetup
	I1009 19:27:48.158323  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.176302  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:48.176728  194626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:27:48.176785  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.195218  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.296952  194626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:27:48.301724  194626 start.go:129] duration metric: took 7.26650252s to createHost
	I1009 19:27:48.301754  194626 start.go:84] releasing machines lock for "ha-898615", held for 7.266653034s
	I1009 19:27:48.301837  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.319813  194626 ssh_runner.go:195] Run: cat /version.json
	I1009 19:27:48.319860  194626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:27:48.319866  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.319919  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.339000  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.339730  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.494727  194626 ssh_runner.go:195] Run: systemctl --version
	I1009 19:27:48.501535  194626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:27:48.537367  194626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:27:48.542466  194626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:27:48.542536  194626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:27:48.568823  194626 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:27:48.568848  194626 start.go:496] detecting cgroup driver to use...
	I1009 19:27:48.568888  194626 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:27:48.568936  194626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:27:48.585765  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:27:48.598310  194626 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:27:48.598367  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:27:48.615925  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:27:48.634773  194626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:27:48.716369  194626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:27:48.805777  194626 docker.go:234] disabling docker service ...
	I1009 19:27:48.805849  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:27:48.826709  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:27:48.840263  194626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:27:48.923854  194626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:27:49.007492  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:27:49.020704  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:27:49.035117  194626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:27:49.035178  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.045691  194626 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:27:49.045759  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.055054  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.064210  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.073237  194626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:27:49.081510  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.090399  194626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.104388  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.113513  194626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:27:49.121175  194626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:27:49.128977  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.201984  194626 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:27:49.310640  194626 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:27:49.310716  194626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:27:49.315028  194626 start.go:564] Will wait 60s for crictl version
	I1009 19:27:49.315098  194626 ssh_runner.go:195] Run: which crictl
	I1009 19:27:49.319134  194626 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:27:49.344131  194626 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:27:49.344210  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.374167  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.406880  194626 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:27:49.408191  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:49.425730  194626 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:27:49.430174  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:49.441269  194626 kubeadm.go:883] updating cluster {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:27:49.441374  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:49.441444  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.474537  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.474563  194626 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:27:49.474626  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.501408  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.501432  194626 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:27:49.501440  194626 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:27:49.501565  194626 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-898615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:27:49.501644  194626 ssh_runner.go:195] Run: crio config
	I1009 19:27:49.548225  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:49.548247  194626 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:27:49.548270  194626 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:27:49.548295  194626 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-898615 NodeName:ha-898615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:27:49.548487  194626 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-898615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:27:49.548516  194626 kube-vip.go:115] generating kube-vip config ...
	I1009 19:27:49.548561  194626 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:27:49.560569  194626 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:27:49.560666  194626 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1009 19:27:49.560713  194626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:27:49.568926  194626 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:27:49.569017  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:27:49.577516  194626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:27:49.590951  194626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:27:49.606324  194626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 19:27:49.618986  194626 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 19:27:49.633526  194626 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:27:49.637412  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:49.647415  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.727039  194626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:49.749827  194626 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615 for IP: 192.168.49.2
	I1009 19:27:49.749848  194626 certs.go:195] generating shared ca certs ...
	I1009 19:27:49.749916  194626 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.750063  194626 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:27:49.750105  194626 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:27:49.750114  194626 certs.go:257] generating profile certs ...
	I1009 19:27:49.750166  194626 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key
	I1009 19:27:49.750191  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt with IP's: []
	I1009 19:27:49.922286  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt ...
	I1009 19:27:49.922322  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt: {Name:mke5f0b5d846145d5885091c3fdef11a03b4705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923243  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key ...
	I1009 19:27:49.923266  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key: {Name:mkb5fe559278888df39f6f81eb44f11c5b40eebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923364  194626 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38
	I1009 19:27:49.923404  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 19:27:50.174751  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 ...
	I1009 19:27:50.174785  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38: {Name:mk7158926fc69073b0db0c318bd9373ec8743788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.174962  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 ...
	I1009 19:27:50.174976  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38: {Name:mk3dc2ee28c7f607abddd15bf98fe46423492612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.175056  194626 certs.go:382] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt
	I1009 19:27:50.175135  194626 certs.go:386] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key
	I1009 19:27:50.175189  194626 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key
	I1009 19:27:50.175204  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt with IP's: []
	I1009 19:27:50.225715  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt ...
	I1009 19:27:50.225750  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt: {Name:mk61103353c5c3a0cbf54b18a910b0c83f19ee7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.225929  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key ...
	I1009 19:27:50.225941  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key: {Name:mkbe3072f67f16772d02b0025253c832dee4c437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.226013  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:27:50.226031  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:27:50.226044  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:27:50.226054  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:27:50.226067  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:27:50.226077  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:27:50.226088  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:27:50.226099  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:27:50.226151  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:27:50.226183  194626 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:27:50.226195  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:27:50.226219  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:27:50.226240  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:27:50.226260  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:27:50.226298  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:50.226322  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.226336  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.226348  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.226853  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:27:50.245895  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:27:50.263570  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:27:50.281671  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:27:50.300223  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:27:50.317734  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:27:50.335066  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:27:50.352704  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:27:50.370437  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:27:50.389421  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:27:50.406983  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:27:50.425614  194626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:27:50.439040  194626 ssh_runner.go:195] Run: openssl version
	I1009 19:27:50.445304  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:27:50.454013  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457873  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457929  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.491581  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:27:50.500958  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:27:50.509597  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513479  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513541  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.548203  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:27:50.557261  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:27:50.565908  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570090  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570139  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.604463  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:27:50.613756  194626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:27:50.617456  194626 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:27:50.617519  194626 kubeadm.go:400] StartCluster: {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:50.617611  194626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:27:50.617699  194626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:27:50.651007  194626 cri.go:89] found id: ""
	I1009 19:27:50.651094  194626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:27:50.660625  194626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:27:50.669536  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:27:50.669585  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:27:50.678565  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:27:50.678583  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:27:50.678632  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:27:50.686673  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:27:50.686739  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:27:50.694481  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:27:50.702511  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:27:50.702571  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:27:50.711335  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.719367  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:27:50.719437  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.727079  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:27:50.735773  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:27:50.735828  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:27:50.743402  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:27:50.783476  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:27:50.783553  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:27:50.804988  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:27:50.805072  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:27:50.805120  194626 kubeadm.go:318] OS: Linux
	I1009 19:27:50.805170  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:27:50.805244  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:27:50.805294  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:27:50.805368  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:27:50.805464  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:27:50.805570  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:27:50.805644  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:27:50.805709  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:27:50.865026  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:27:50.865138  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:27:50.865228  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:27:50.872900  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:27:50.874725  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:27:50.874828  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:27:50.874946  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:27:51.069934  194626 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:27:51.167065  194626 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:27:51.280550  194626 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:27:51.481119  194626 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:27:51.788062  194626 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:27:51.788230  194626 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:51.872181  194626 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:27:51.872362  194626 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:52.084923  194626 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:27:52.283000  194626 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:27:52.633617  194626 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:27:52.633683  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:27:52.795142  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:27:52.946617  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:27:53.060032  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:27:53.125689  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:27:53.322626  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:27:53.323225  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:27:53.325508  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:27:53.328645  194626 out.go:252]   - Booting up control plane ...
	I1009 19:27:53.328749  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:27:53.328835  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:27:53.328914  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:27:53.343297  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:27:53.343474  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:27:53.350427  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:27:53.350584  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:27:53.350658  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:27:53.451355  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:27:53.451572  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:27:54.452237  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000953428s
	I1009 19:27:54.456489  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:27:54.456634  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:27:54.456766  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:27:54.456840  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:31:54.457899  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	I1009 19:31:54.458060  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	I1009 19:31:54.458160  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	I1009 19:31:54.458176  194626 kubeadm.go:318] 
	I1009 19:31:54.458302  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:31:54.458440  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:31:54.458603  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:31:54.458813  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:31:54.458922  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:31:54.459044  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:31:54.459063  194626 kubeadm.go:318] 
	I1009 19:31:54.462630  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:54.462803  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:31:54.463587  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1009 19:31:54.463708  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1009 19:31:54.463871  194626 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000953428s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 19:31:54.463985  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 19:31:57.247677  194626 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.783657743s)
	I1009 19:31:57.247781  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:31:57.261935  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:31:57.261998  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:31:57.270455  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:31:57.270478  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:31:57.270524  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:31:57.278767  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:31:57.278835  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:31:57.286752  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:31:57.295053  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:31:57.295113  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:31:57.303048  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.311557  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:31:57.311631  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.319853  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:31:57.328199  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:31:57.328265  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:31:57.336006  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:31:57.396540  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:57.457937  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:36:00.178531  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 19:36:00.178718  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 19:36:00.182333  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:36:00.182408  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:36:00.182502  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:36:00.182552  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:36:00.182582  194626 kubeadm.go:318] OS: Linux
	I1009 19:36:00.182645  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:36:00.182724  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:36:00.182769  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:36:00.182812  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:36:00.182853  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:36:00.182902  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:36:00.182967  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:36:00.183022  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:36:00.183086  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:36:00.183241  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:36:00.183374  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:36:00.183474  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:36:00.185572  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:36:00.185640  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:36:00.185695  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:36:00.185782  194626 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:36:00.185857  194626 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:36:00.185922  194626 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:36:00.185977  194626 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:36:00.186037  194626 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:36:00.186100  194626 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:36:00.186172  194626 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:36:00.186259  194626 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:36:00.186320  194626 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:36:00.186413  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:36:00.186457  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:36:00.186503  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:36:00.186567  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:36:00.186661  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:36:00.186746  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:36:00.186865  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:36:00.186965  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:36:00.188328  194626 out.go:252]   - Booting up control plane ...
	I1009 19:36:00.188439  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:36:00.188511  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:36:00.188565  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:36:00.188647  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:36:00.188734  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:36:00.188835  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:36:00.188946  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:36:00.188994  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:36:00.189107  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:36:00.189207  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:36:00.189257  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.196157ms
	I1009 19:36:00.189351  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:36:00.189534  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:36:00.189618  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:36:00.189686  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:36:00.189753  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	I1009 19:36:00.189820  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	I1009 19:36:00.189897  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	I1009 19:36:00.189910  194626 kubeadm.go:318] 
	I1009 19:36:00.190035  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:36:00.190115  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:36:00.190192  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:36:00.190268  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:36:00.190333  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:36:00.190440  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:36:00.190476  194626 kubeadm.go:318] 
	I1009 19:36:00.190536  194626 kubeadm.go:402] duration metric: took 8m9.573023353s to StartCluster
	I1009 19:36:00.190620  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:36:00.190688  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:36:00.221163  194626 cri.go:89] found id: ""
	I1009 19:36:00.221200  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.221211  194626 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:36:00.221218  194626 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:36:00.221282  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:36:00.248439  194626 cri.go:89] found id: ""
	I1009 19:36:00.248473  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.248486  194626 logs.go:284] No container was found matching "etcd"
	I1009 19:36:00.248498  194626 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:36:00.248564  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:36:00.275751  194626 cri.go:89] found id: ""
	I1009 19:36:00.275781  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.275792  194626 logs.go:284] No container was found matching "coredns"
	I1009 19:36:00.275801  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:36:00.275868  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:36:00.304183  194626 cri.go:89] found id: ""
	I1009 19:36:00.304218  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.304227  194626 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:36:00.304233  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:36:00.304286  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:36:00.333120  194626 cri.go:89] found id: ""
	I1009 19:36:00.333154  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.333164  194626 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:36:00.333171  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:36:00.333221  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:36:00.362504  194626 cri.go:89] found id: ""
	I1009 19:36:00.362527  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.362536  194626 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:36:00.362542  194626 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:36:00.362602  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:36:00.391912  194626 cri.go:89] found id: ""
	I1009 19:36:00.391939  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.391949  194626 logs.go:284] No container was found matching "kindnet"
	I1009 19:36:00.391964  194626 logs.go:123] Gathering logs for kubelet ...
	I1009 19:36:00.391982  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:36:00.460680  194626 logs.go:123] Gathering logs for dmesg ...
	I1009 19:36:00.460722  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:36:00.473600  194626 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:36:00.473634  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:36:00.538515  194626 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:36:00.538542  194626 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:36:00.538556  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:36:00.601750  194626 logs.go:123] Gathering logs for container status ...
	I1009 19:36:00.601793  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 19:36:00.631755  194626 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 19:36:00.631818  194626 out.go:285] * 
	W1009 19:36:00.631894  194626 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.631914  194626 out.go:285] * 
	W1009 19:36:00.633671  194626 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:36:00.637455  194626 out.go:203] 
	W1009 19:36:00.638829  194626 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.638866  194626 out.go:285] * 
	I1009 19:36:00.640615  194626 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:37:25 ha-898615 crio[777]: time="2025-10-09T19:37:25.029962448Z" level=info msg="createCtr: deleting container f0463fb05c9a236bd42767e21fccab4aaedc76d8a27f273f0b303fc1be13f493 from storage" id=f7c60711-99ef-4e00-a9dd-67c148495d8d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:25 ha-898615 crio[777]: time="2025-10-09T19:37:25.032912786Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-898615_kube-system_2d43b86ac03e93e11f35c923e2560103_0" id=7d35822a-5efe-4427-b230-14e5799f66d4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:25 ha-898615 crio[777]: time="2025-10-09T19:37:25.033171537Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-898615_kube-system_bc0e16a1814c5389485436acdfc968ed_0" id=f7c60711-99ef-4e00-a9dd-67c148495d8d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.00094666Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=be7e00b5-29c5-4c96-bb3c-32e71b451835 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.002018236Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=1ccd5bbb-e526-4f37-a424-321687933f28 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.003064189Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-898615/kube-controller-manager" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.003332904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.006742494Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.007212425Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.026056442Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.027514114Z" level=info msg="createCtr: deleting container ID 36a51f78e539029fe2de362c3a84544e8cc3cf7dc7b666f7684fc54c2facb606 from idIndex" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.027573974Z" level=info msg="createCtr: removing container 36a51f78e539029fe2de362c3a84544e8cc3cf7dc7b666f7684fc54c2facb606" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.027610427Z" level=info msg="createCtr: deleting container 36a51f78e539029fe2de362c3a84544e8cc3cf7dc7b666f7684fc54c2facb606 from storage" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.029921646Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-898615_kube-system_606a9e8949f01295d66148e5eac379ce_0" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.001545609Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=02d2fba2-28a5-471d-ae43-8b572455a98b name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.003762187Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=2a3b67b3-badd-4ded-96d4-4e4daa736fa3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.004896442Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-898615/kube-apiserver" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.005099326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.008541201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.008958606Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.026617624Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.027992361Z" level=info msg="createCtr: deleting container ID 46b6cad4087c1b4658ce50e3012b46105eb18113385d6e456b9b9934d03e532f from idIndex" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.02802929Z" level=info msg="createCtr: removing container 46b6cad4087c1b4658ce50e3012b46105eb18113385d6e456b9b9934d03e532f" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.028068885Z" level=info msg="createCtr: deleting container 46b6cad4087c1b4658ce50e3012b46105eb18113385d6e456b9b9934d03e532f from storage" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.030415662Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-898615_kube-system_aad558bc9f2efde8d3c90d373798de18_0" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:37:34.898217    3180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:34.898766    3180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:34.900452    3180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:34.900974    3180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:34.902544    3180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:37:34 up  1:20,  0 user,  load average: 0.02, 0.05, 1.72
	Linux ha-898615 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:37:25 ha-898615 kubelet[1937]: E1009 19:37:25.034797    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-898615" podUID="bc0e16a1814c5389485436acdfc968ed"
	Oct 09 19:37:27 ha-898615 kubelet[1937]: E1009 19:37:27.643057    1937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:37:27 ha-898615 kubelet[1937]: I1009 19:37:27.812540    1937 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:37:27 ha-898615 kubelet[1937]: E1009 19:37:27.812939    1937 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	Oct 09 19:37:28 ha-898615 kubelet[1937]: E1009 19:37:28.000453    1937 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:37:28 ha-898615 kubelet[1937]: E1009 19:37:28.030276    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:37:28 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:28 ha-898615 kubelet[1937]:  > podSandboxID="da9ccac0760165455089feb0a833fa92808edbd25f27148d0daf896a8d20b03b"
	Oct 09 19:37:28 ha-898615 kubelet[1937]: E1009 19:37:28.030421    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:37:28 ha-898615 kubelet[1937]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-898615_kube-system(606a9e8949f01295d66148e5eac379ce): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:28 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:28 ha-898615 kubelet[1937]: E1009 19:37:28.030462    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-898615" podUID="606a9e8949f01295d66148e5eac379ce"
	Oct 09 19:37:29 ha-898615 kubelet[1937]: E1009 19:37:29.717131    1937 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-898615.186ce986e63acb66  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-898615,UID:ha-898615,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-898615 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-898615,},FirstTimestamp:2025-10-09 19:31:59.992523622 +0000 UTC m=+0.320486905,LastTimestamp:2025-10-09 19:31:59.992523622 +0000 UTC m=+0.320486905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-898615,}"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.000922    1937 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.020052    1937 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-898615\" not found"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.030731    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:37:30 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:30 ha-898615 kubelet[1937]:  > podSandboxID="d0dc08f9b3192b73d8d0a9663be3c0a1eb6c98f898ae3f73ccefd6abc80c6b8e"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.030862    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:37:30 ha-898615 kubelet[1937]:         container kube-apiserver start failed in pod kube-apiserver-ha-898615_kube-system(aad558bc9f2efde8d3c90d373798de18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:30 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.030902    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-898615" podUID="aad558bc9f2efde8d3c90d373798de18"
	Oct 09 19:37:34 ha-898615 kubelet[1937]: E1009 19:37:34.644344    1937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:37:34 ha-898615 kubelet[1937]: I1009 19:37:34.814758    1937 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:37:34 ha-898615 kubelet[1937]: E1009 19:37:34.815194    1937 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	

                                                
                                                
-- /stdout --
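For reference: the CRI-O and kubelet excerpts above fail every CreateContainer call with "cannot open sd-bus: No such file or directory", which is why the container status listing is empty and the control-plane health checks time out. That error generally means the OCI runtime is trying to reach systemd over D-Bus (typically because the runtime is configured with the systemd cgroup manager) while no systemd bus socket is reachable inside the node. A minimal sketch for checking both sides by hand, assuming the docker driver and the profile/container name ha-898615 used in this report and the default CRI-O config location under /etc/crio:

	# Which cgroup manager is CRI-O configured to use inside the node?
	docker exec ha-898615 sh -c 'grep -rn cgroup_manager /etc/crio/ 2>/dev/null'
	# Is a systemd bus socket actually present inside the node?
	docker exec ha-898615 sh -c 'ls -l /run/systemd/private /run/dbus/system_bus_socket'
	# List the failed create attempts directly, as kubeadm's own hint above suggests.
	docker exec ha-898615 crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a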
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615: exit status 6 (311.031854ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:37:35.291032  201878 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-898615" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (1.42s)
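The status probe above fails for two stacked reasons: the profile's endpoint is missing from the host kubeconfig ("ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig), and inside the node the apiserver on port 8443 never became healthy, so every kubectl call in the logs above ends in connection refused. The warning's own suggestion, `minikube update-context`, only repairs the kubeconfig entry; it cannot help while the apiserver is down. A minimal sketch for inspecting and repairing the kubectl side, assuming the KUBECONFIG path shown in this report:

	# What contexts does the test kubeconfig actually contain?
	KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig kubectl config get-contexts
	# Rewrite the context for the profile, as the warning above suggests.
	out/minikube-linux-amd64 -p ha-898615 update-context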

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (1.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 node add --alsologtostderr -v 5: exit status 103 (257.969942ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-898615 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-898615"

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:37:35.353932  201995 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:37:35.354281  201995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:35.354293  201995 out.go:374] Setting ErrFile to fd 2...
	I1009 19:37:35.354297  201995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:35.354513  201995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:37:35.354829  201995 mustload.go:65] Loading cluster: ha-898615
	I1009 19:37:35.355177  201995 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:37:35.355590  201995 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:37:35.373670  201995 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:37:35.374031  201995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:37:35.432218  201995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:37:35.422395563 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:37:35.432323  201995 api_server.go:166] Checking apiserver status ...
	I1009 19:37:35.432439  201995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:37:35.432521  201995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:37:35.451068  201995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	W1009 19:37:35.556796  201995 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:37:35.558893  201995 out.go:179] * The control-plane node ha-898615 apiserver is not running: (state=Stopped)
	I1009 19:37:35.560037  201995 out.go:179]   To start a cluster, run: "minikube start -p ha-898615"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-898615 node add --alsologtostderr -v 5" : exit status 103
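The exit status 103 above is minikube refusing to add a node because its control-plane probe failed: before adding a node it checks the primary control plane by running `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH (visible in the stderr above), and the probe finds no apiserver process. A sketch of the same probe run by hand, assuming the docker driver and container name from this report:

	# The same process check minikube performs over SSH; a non-zero exit means no kube-apiserver is running.
	docker exec ha-898615 pgrep -xnf 'kube-apiserver.*minikube.*'; echo "exit status: $?"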
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-898615
helpers_test.go:243: (dbg) docker inspect ha-898615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	        "Created": "2025-10-09T19:27:46.185451139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195210,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:27:46.220087387Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hostname",
	        "HostsPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hosts",
	        "LogPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e-json.log",
	        "Name": "/ha-898615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-898615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-898615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	                "LowerDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-898615",
	                "Source": "/var/lib/docker/volumes/ha-898615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-898615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-898615",
	                "name.minikube.sigs.k8s.io": "ha-898615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39eaa0a5ee22c7c1e0e0329f3f944afa2ddef6cd571ebd9f0aa050805b81a54f",
	            "SandboxKey": "/var/run/docker/netns/39eaa0a5ee22",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-898615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:31:dc:da:08:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a28b71fbb3464ef90fae817eea2f214ee0a3d1fe379af7ad6c6cc41b0261919e",
	                    "EndpointID": "7930ac5db0626d63671f2a77753202141442de70603e1b4acf6213e0bd34944d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-898615",
	                        "24eb0e5b6c52"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
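The inspect output above also records how the node's ports are published to the host loopback: 22/tcp at 127.0.0.1:32783 (the SSH port the logs dial) and 8443/tcp at 127.0.0.1:32786 (the apiserver port). A sketch of checking that mapping from the host, reusing the same Go template style the minikube log uses for 22/tcp; with the control plane down, the health probe is refused:

	# Resolve the host port published for the apiserver port 8443/tcp.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-898615
	# Probe the apiserver health endpoint through that mapping (expected: connection refused in this run).
	curl -k https://127.0.0.1:32786/livez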
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615: exit status 6 (306.500288ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:37:35.874608  202101 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-158523 image ls --format json --alsologtostderr                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image build -t localhost/my-image:functional-158523 testdata/build --alsologtostderr          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image ls --format table --alsologtostderr                                                     │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image ls                                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ delete  │ -p functional-158523                                                                                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │ 09 Oct 25 19:27 UTC │
	│ start   │ ha-898615 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- rollout status deployment/busybox                                                          │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node add --alsologtostderr -v 5                                                                       │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
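The exec rows in the table above omit the pod name; the test presumably fills it in per busybox replica at run time. A minimal manual equivalent of that DNS check, assuming the kubeconfig context ha-898615 created by this run and that the busybox pods live in the default namespace, looks roughly like:

	# list the pods created from testdata/ha/ha-pod-dns-test.yaml
	PODS=$(kubectl --context ha-898615 get pods -o jsonpath='{.items[*].metadata.name}')
	# repeat the lookups the table records, once per pod
	for p in $PODS; do
	  kubectl --context ha-898615 exec "$p" -- nslookup kubernetes.io
	  kubectl --context ha-898615 exec "$p" -- nslookup kubernetes.default.svc.cluster.local
	done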
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:27:40
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:27:40.840216  194626 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:27:40.840464  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840473  194626 out.go:374] Setting ErrFile to fd 2...
	I1009 19:27:40.840478  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840669  194626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:27:40.841171  194626 out.go:368] Setting JSON to false
	I1009 19:27:40.842061  194626 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4210,"bootTime":1760033851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:27:40.842167  194626 start.go:143] virtualization: kvm guest
	I1009 19:27:40.844410  194626 out.go:179] * [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:27:40.845759  194626 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:27:40.845801  194626 notify.go:221] Checking for updates...
	I1009 19:27:40.848757  194626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:27:40.850175  194626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:27:40.851491  194626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:27:40.852705  194626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:27:40.853892  194626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:27:40.855321  194626 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:27:40.878925  194626 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:27:40.879089  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:40.938194  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.927865936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:40.938299  194626 docker.go:319] overlay module found
	I1009 19:27:40.940236  194626 out.go:179] * Using the docker driver based on user configuration
	I1009 19:27:40.941822  194626 start.go:309] selected driver: docker
	I1009 19:27:40.941843  194626 start.go:930] validating driver "docker" against <nil>
	I1009 19:27:40.941856  194626 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:27:40.942433  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:41.005454  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.995221815 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:41.005685  194626 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 19:27:41.005966  194626 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:27:41.007942  194626 out.go:179] * Using Docker driver with root privileges
	I1009 19:27:41.009445  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:41.009493  194626 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 19:27:41.009501  194626 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:27:41.009585  194626 start.go:353] cluster config:
	{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1009 19:27:41.010949  194626 out.go:179] * Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	I1009 19:27:41.012190  194626 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:27:41.013322  194626 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:27:41.014388  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.014435  194626 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:27:41.014445  194626 cache.go:58] Caching tarball of preloaded images
	I1009 19:27:41.014436  194626 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:27:41.014539  194626 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:27:41.014554  194626 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:27:41.014900  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:41.014929  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json: {Name:mk248a27bef73ecdf1bd71857a83cb8ef52e81ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:41.034874  194626 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:27:41.034900  194626 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:27:41.034920  194626 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:27:41.034961  194626 start.go:361] acquireMachinesLock for ha-898615: {Name:mk23a3ebf19c307a491c00f9452d757837bd240e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:27:41.035085  194626 start.go:365] duration metric: took 102.16µs to acquireMachinesLock for "ha-898615"
	I1009 19:27:41.035119  194626 start.go:94] Provisioning new machine with config: &{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:27:41.035201  194626 start.go:126] createHost starting for "" (driver="docker")
	I1009 19:27:41.037427  194626 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 19:27:41.037659  194626 start.go:160] libmachine.API.Create for "ha-898615" (driver="docker")
	I1009 19:27:41.037695  194626 client.go:168] LocalClient.Create starting
	I1009 19:27:41.037764  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem
	I1009 19:27:41.037804  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037821  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.037887  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem
	I1009 19:27:41.037913  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037927  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.038348  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:27:41.055472  194626 cli_runner.go:211] docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:27:41.055548  194626 network_create.go:284] running [docker network inspect ha-898615] to gather additional debugging logs...
	I1009 19:27:41.055569  194626 cli_runner.go:164] Run: docker network inspect ha-898615
	W1009 19:27:41.074024  194626 cli_runner.go:211] docker network inspect ha-898615 returned with exit code 1
	I1009 19:27:41.074056  194626 network_create.go:287] error running [docker network inspect ha-898615]: docker network inspect ha-898615: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-898615 not found
	I1009 19:27:41.074071  194626 network_create.go:289] output of [docker network inspect ha-898615]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-898615 not found
	
	** /stderr **
	I1009 19:27:41.074158  194626 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:41.091849  194626 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a266e0}
	I1009 19:27:41.091899  194626 network_create.go:124] attempt to create docker network ha-898615 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 19:27:41.091955  194626 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-898615 ha-898615
	I1009 19:27:41.153434  194626 network_create.go:108] docker network ha-898615 192.168.49.0/24 created
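A quick way to confirm the bridge network minikube just created is the docker CLI itself (network and label names are the ones from this run):

	# print the subnet and gateway of the ha-898615 network
	docker network inspect ha-898615 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# list networks carrying the minikube label set above
	docker network ls --filter label=name.minikube.sigs.k8s.io=ha-898615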
	I1009 19:27:41.153469  194626 kic.go:121] calculated static IP "192.168.49.2" for the "ha-898615" container
	I1009 19:27:41.153533  194626 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:27:41.170560  194626 cli_runner.go:164] Run: docker volume create ha-898615 --label name.minikube.sigs.k8s.io=ha-898615 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:27:41.190645  194626 oci.go:103] Successfully created a docker volume ha-898615
	I1009 19:27:41.190737  194626 cli_runner.go:164] Run: docker run --rm --name ha-898615-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --entrypoint /usr/bin/test -v ha-898615:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:27:41.600938  194626 oci.go:107] Successfully prepared a docker volume ha-898615
	I1009 19:27:41.600967  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.600993  194626 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:27:41.601055  194626 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 19:27:46.106521  194626 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.505414435s)
	I1009 19:27:46.106562  194626 kic.go:203] duration metric: took 4.50556336s to extract preloaded images to volume ...
	W1009 19:27:46.106658  194626 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 19:27:46.106697  194626 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 19:27:46.106744  194626 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:27:46.168307  194626 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-898615 --name ha-898615 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-898615 --network ha-898615 --ip 192.168.49.2 --volume ha-898615:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:27:46.438758  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Running}}
	I1009 19:27:46.461401  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.481444  194626 cli_runner.go:164] Run: docker exec ha-898615 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:27:46.527865  194626 oci.go:144] the created container "ha-898615" has a running status.
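The docker run above publishes container ports 22, 2376, 5000, 8443 and 32443 to ephemeral ports on 127.0.0.1; the provisioning step below resolves 22/tcp to 127.0.0.1:32783. A hedged manual check of those mappings:

	# host port bound to the container's SSH port
	docker port ha-898615 22/tcp
	# full port mapping table
	docker port ha-898615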
	I1009 19:27:46.527897  194626 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa...
	I1009 19:27:46.644201  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 19:27:46.644249  194626 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:27:46.674019  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.696195  194626 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:27:46.696225  194626 kic_runner.go:114] Args: [docker exec --privileged ha-898615 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:27:46.751886  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.774821  194626 machine.go:93] provisionDockerMachine start ...
	I1009 19:27:46.774929  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.795147  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.795497  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.795517  194626 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:27:46.948463  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:46.948514  194626 ubuntu.go:182] provisioning hostname "ha-898615"
	I1009 19:27:46.948583  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.967551  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.967801  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.967821  194626 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-898615 && echo "ha-898615" | sudo tee /etc/hostname
	I1009 19:27:47.127119  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:47.127192  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.145629  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.145957  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.145990  194626 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-898615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-898615/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-898615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:27:47.294310  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:27:47.294342  194626 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:27:47.294368  194626 ubuntu.go:190] setting up certificates
	I1009 19:27:47.294401  194626 provision.go:84] configureAuth start
	I1009 19:27:47.294454  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:47.312608  194626 provision.go:143] copyHostCerts
	I1009 19:27:47.312651  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312684  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:27:47.312731  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312817  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:27:47.312923  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.312952  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:27:47.312962  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.313014  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:27:47.313086  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313114  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:27:47.313124  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313163  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:27:47.313236  194626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.ha-898615 san=[127.0.0.1 192.168.49.2 ha-898615 localhost minikube]
	I1009 19:27:47.539620  194626 provision.go:177] copyRemoteCerts
	I1009 19:27:47.539697  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:27:47.539740  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.557819  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:47.662665  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:27:47.662781  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:27:47.683372  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:27:47.683457  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:27:47.701669  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:27:47.701736  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:27:47.720164  194626 provision.go:87] duration metric: took 425.744193ms to configureAuth
	I1009 19:27:47.720200  194626 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:27:47.720451  194626 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:47.720564  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.738437  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.738688  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.738714  194626 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:27:48.000107  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:27:48.000134  194626 machine.go:96] duration metric: took 1.225290325s to provisionDockerMachine
	I1009 19:27:48.000148  194626 client.go:171] duration metric: took 6.962444548s to LocalClient.Create
	I1009 19:27:48.000174  194626 start.go:168] duration metric: took 6.962517267s to libmachine.API.Create "ha-898615"
	I1009 19:27:48.000183  194626 start.go:294] postStartSetup for "ha-898615" (driver="docker")
	I1009 19:27:48.000199  194626 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:27:48.000266  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:27:48.000306  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.017815  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.123997  194626 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:27:48.127754  194626 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:27:48.127793  194626 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:27:48.127806  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:27:48.127864  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:27:48.127968  194626 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:27:48.127983  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:27:48.128079  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:27:48.136280  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:48.157989  194626 start.go:297] duration metric: took 157.790225ms for postStartSetup
	I1009 19:27:48.158323  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.176302  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:48.176728  194626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:27:48.176785  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.195218  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.296952  194626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:27:48.301724  194626 start.go:129] duration metric: took 7.26650252s to createHost
	I1009 19:27:48.301754  194626 start.go:84] releasing machines lock for "ha-898615", held for 7.266653034s
	I1009 19:27:48.301837  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.319813  194626 ssh_runner.go:195] Run: cat /version.json
	I1009 19:27:48.319860  194626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:27:48.319866  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.319919  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.339000  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.339730  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.494727  194626 ssh_runner.go:195] Run: systemctl --version
	I1009 19:27:48.501535  194626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:27:48.537367  194626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:27:48.542466  194626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:27:48.542536  194626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:27:48.568823  194626 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:27:48.568848  194626 start.go:496] detecting cgroup driver to use...
	I1009 19:27:48.568888  194626 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:27:48.568936  194626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:27:48.585765  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:27:48.598310  194626 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:27:48.598367  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:27:48.615925  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:27:48.634773  194626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:27:48.716369  194626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:27:48.805777  194626 docker.go:234] disabling docker service ...
	I1009 19:27:48.805849  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:27:48.826709  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:27:48.840263  194626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:27:48.923854  194626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:27:49.007492  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:27:49.020704  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:27:49.035117  194626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:27:49.035178  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.045691  194626 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:27:49.045759  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.055054  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.064210  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.073237  194626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:27:49.081510  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.090399  194626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.104388  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.113513  194626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:27:49.121175  194626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:27:49.128977  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.201984  194626 ssh_runner.go:195] Run: sudo systemctl restart crio
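The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl) before CRI-O is restarted. A hedged way to confirm the result from the host; the exact drop-in layout depends on the kicbase image:

	# crictl endpoint written at 19:27:49.020
	docker exec ha-898615 cat /etc/crictl.yaml
	# keys touched by the sed commands; expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
	docker exec ha-898615 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf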
	I1009 19:27:49.310640  194626 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:27:49.310716  194626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:27:49.315028  194626 start.go:564] Will wait 60s for crictl version
	I1009 19:27:49.315098  194626 ssh_runner.go:195] Run: which crictl
	I1009 19:27:49.319134  194626 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:27:49.344131  194626 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:27:49.344210  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.374167  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.406880  194626 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:27:49.408191  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:49.425730  194626 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:27:49.430174  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:49.441269  194626 kubeadm.go:883] updating cluster {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:27:49.441374  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:49.441444  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.474537  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.474563  194626 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:27:49.474626  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.501408  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.501432  194626 cache_images.go:85] Images are preloaded, skipping loading
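Both crictl calls above report that every image needed for v1.34.1 is already present from the preload tarball. The same check can be run by hand against this container (crictl reads its endpoint from the /etc/crictl.yaml written earlier):

	docker exec ha-898615 sudo crictl images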
	I1009 19:27:49.501440  194626 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:27:49.501565  194626 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-898615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:27:49.501644  194626 ssh_runner.go:195] Run: crio config
	I1009 19:27:49.548225  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:49.548247  194626 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:27:49.548270  194626 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:27:49.548295  194626 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-898615 NodeName:ha-898615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:27:49.548487  194626 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-898615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
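This generated kubeadm config is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below (the scp at 19:27:49.606). A hedged sanity check of such a file, assuming the kubeadm binary sits in minikube's usual binaries directory inside the node:

	docker exec ha-898615 /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new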
	
	I1009 19:27:49.548516  194626 kube-vip.go:115] generating kube-vip config ...
	I1009 19:27:49.548561  194626 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:27:49.560569  194626 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:27:49.560666  194626 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
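Per the manifest above, kube-vip runs as a static pod, performs leader election (vip_leaderelection=true) and announces the HA VIP 192.168.49.254 on eth0 via ARP. Two hedged checks once the control plane is up:

	# the VIP should appear as a secondary address on eth0 of the current leader
	docker exec ha-898615 ip -4 addr show dev eth0
	# the API server should answer on the VIP from the host (-k because the cert is minikube's own CA)
	curl -sk https://192.168.49.254:8443/version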
	I1009 19:27:49.560713  194626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:27:49.568926  194626 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:27:49.569017  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:27:49.577516  194626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:27:49.590951  194626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:27:49.606324  194626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 19:27:49.618986  194626 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
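A minimal check that the four files scp'd above actually landed inside the node:

	docker exec ha-898615 ls -l \
	  /etc/systemd/system/kubelet.service.d/10-kubeadm.conf \
	  /lib/systemd/system/kubelet.service \
	  /var/tmp/minikube/kubeadm.yaml.new \
	  /etc/kubernetes/manifests/kube-vip.yaml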
	I1009 19:27:49.633526  194626 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:27:49.637412  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:49.647415  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.727039  194626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:49.749827  194626 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615 for IP: 192.168.49.2
	I1009 19:27:49.749848  194626 certs.go:195] generating shared ca certs ...
	I1009 19:27:49.749916  194626 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.750063  194626 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:27:49.750105  194626 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:27:49.750114  194626 certs.go:257] generating profile certs ...
	I1009 19:27:49.750166  194626 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key
	I1009 19:27:49.750191  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt with IP's: []
	I1009 19:27:49.922286  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt ...
	I1009 19:27:49.922322  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt: {Name:mke5f0b5d846145d5885091c3fdef11a03b4705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923243  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key ...
	I1009 19:27:49.923266  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key: {Name:mkb5fe559278888df39f6f81eb44f11c5b40eebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923364  194626 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38
	I1009 19:27:49.923404  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 19:27:50.174751  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 ...
	I1009 19:27:50.174785  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38: {Name:mk7158926fc69073b0db0c318bd9373ec8743788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.174962  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 ...
	I1009 19:27:50.174976  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38: {Name:mk3dc2ee28c7f607abddd15bf98fe46423492612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.175056  194626 certs.go:382] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt
	I1009 19:27:50.175135  194626 certs.go:386] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key
	I1009 19:27:50.175189  194626 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key
	I1009 19:27:50.175204  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt with IP's: []
	I1009 19:27:50.225715  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt ...
	I1009 19:27:50.225750  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt: {Name:mk61103353c5c3a0cbf54b18a910b0c83f19ee7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.225929  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key ...
	I1009 19:27:50.225941  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key: {Name:mkbe3072f67f16772d02b0025253c832dee4c437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.226013  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:27:50.226031  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:27:50.226044  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:27:50.226054  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:27:50.226067  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:27:50.226077  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:27:50.226088  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:27:50.226099  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:27:50.226151  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:27:50.226183  194626 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:27:50.226195  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:27:50.226219  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:27:50.226240  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:27:50.226260  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:27:50.226298  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:50.226322  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.226336  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.226348  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.226853  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:27:50.245895  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:27:50.263570  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:27:50.281671  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:27:50.300223  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:27:50.317734  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:27:50.335066  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:27:50.352704  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:27:50.370437  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:27:50.389421  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:27:50.406983  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:27:50.425614  194626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:27:50.439040  194626 ssh_runner.go:195] Run: openssl version
	I1009 19:27:50.445304  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:27:50.454013  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457873  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457929  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.491581  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:27:50.500958  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:27:50.509597  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513479  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513541  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.548203  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:27:50.557261  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:27:50.565908  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570090  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570139  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.604463  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
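The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash filenames: the value printed by each `openssl x509 -hash -noout` run is the name OpenSSL looks up under /etc/ssl/certs when verifying against that CA. A minimal sketch of the same convention, assuming the minikubeCA.pem path from the log:

	# Sketch only (not taken from this run): derive the <hash>.0 link name for a CA
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"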
	I1009 19:27:50.613756  194626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:27:50.617456  194626 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:27:50.617519  194626 kubeadm.go:400] StartCluster: {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:50.617611  194626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:27:50.617699  194626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:27:50.651007  194626 cri.go:89] found id: ""
	I1009 19:27:50.651094  194626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:27:50.660625  194626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:27:50.669536  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:27:50.669585  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:27:50.678565  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:27:50.678583  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:27:50.678632  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:27:50.686673  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:27:50.686739  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:27:50.694481  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:27:50.702511  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:27:50.702571  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:27:50.711335  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.719367  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:27:50.719437  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.727079  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:27:50.735773  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:27:50.735828  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:27:50.743402  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:27:50.783476  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:27:50.783553  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:27:50.804988  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:27:50.805072  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:27:50.805120  194626 kubeadm.go:318] OS: Linux
	I1009 19:27:50.805170  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:27:50.805244  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:27:50.805294  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:27:50.805368  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:27:50.805464  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:27:50.805570  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:27:50.805644  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:27:50.805709  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:27:50.865026  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:27:50.865138  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:27:50.865228  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:27:50.872900  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:27:50.874725  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:27:50.874828  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:27:50.874946  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:27:51.069934  194626 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:27:51.167065  194626 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:27:51.280550  194626 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:27:51.481119  194626 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:27:51.788062  194626 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:27:51.788230  194626 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:51.872181  194626 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:27:51.872362  194626 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:52.084923  194626 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:27:52.283000  194626 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:27:52.633617  194626 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:27:52.633683  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:27:52.795142  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:27:52.946617  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:27:53.060032  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:27:53.125689  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:27:53.322626  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:27:53.323225  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:27:53.325508  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:27:53.328645  194626 out.go:252]   - Booting up control plane ...
	I1009 19:27:53.328749  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:27:53.328835  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:27:53.328914  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:27:53.343297  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:27:53.343474  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:27:53.350427  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:27:53.350584  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:27:53.350658  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:27:53.451355  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:27:53.451572  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:27:54.452237  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000953428s
	I1009 19:27:54.456489  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:27:54.456634  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:27:54.456766  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:27:54.456840  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:31:54.457899  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	I1009 19:31:54.458060  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	I1009 19:31:54.458160  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	I1009 19:31:54.458176  194626 kubeadm.go:318] 
	I1009 19:31:54.458302  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:31:54.458440  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:31:54.458603  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:31:54.458813  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:31:54.458922  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:31:54.459044  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:31:54.459063  194626 kubeadm.go:318] 
	I1009 19:31:54.462630  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:54.462803  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:31:54.463587  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1009 19:31:54.463708  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1009 19:31:54.463871  194626 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000953428s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
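At this point the first kubeadm init attempt has timed out waiting for all three control-plane components; minikube resets and retries below, and later gathers kubelet, CRI-O and container-status output itself. A hedged sketch of the same manual triage on the node, using only commands that appear further down in this log:

	# Manual triage sketch (mirrors the commands minikube runs later in this log)
	sudo crictl ps -a                      # were any control-plane containers created at all?
	sudo journalctl -u kubelet -n 400      # kubelet errors while starting the static pods
	sudo journalctl -u crio -n 400         # CRI-O errors pulling or running the images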
	
	I1009 19:31:54.463985  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 19:31:57.247677  194626 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.783657743s)
	I1009 19:31:57.247781  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:31:57.261935  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:31:57.261998  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:31:57.270455  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:31:57.270478  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:31:57.270524  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:31:57.278767  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:31:57.278835  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:31:57.286752  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:31:57.295053  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:31:57.295113  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:31:57.303048  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.311557  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:31:57.311631  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.319853  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:31:57.328199  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:31:57.328265  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:31:57.336006  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:31:57.396540  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:57.457937  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:36:00.178531  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 19:36:00.178718  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 19:36:00.182333  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:36:00.182408  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:36:00.182502  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:36:00.182552  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:36:00.182582  194626 kubeadm.go:318] OS: Linux
	I1009 19:36:00.182645  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:36:00.182724  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:36:00.182769  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:36:00.182812  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:36:00.182853  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:36:00.182902  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:36:00.182967  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:36:00.183022  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:36:00.183086  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:36:00.183241  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:36:00.183374  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:36:00.183474  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:36:00.185572  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:36:00.185640  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:36:00.185695  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:36:00.185782  194626 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:36:00.185857  194626 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:36:00.185922  194626 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:36:00.185977  194626 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:36:00.186037  194626 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:36:00.186100  194626 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:36:00.186172  194626 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:36:00.186259  194626 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:36:00.186320  194626 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:36:00.186413  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:36:00.186457  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:36:00.186503  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:36:00.186567  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:36:00.186661  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:36:00.186746  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:36:00.186865  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:36:00.186965  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:36:00.188328  194626 out.go:252]   - Booting up control plane ...
	I1009 19:36:00.188439  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:36:00.188511  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:36:00.188565  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:36:00.188647  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:36:00.188734  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:36:00.188835  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:36:00.188946  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:36:00.188994  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:36:00.189107  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:36:00.189207  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:36:00.189257  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.196157ms
	I1009 19:36:00.189351  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:36:00.189534  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:36:00.189618  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:36:00.189686  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:36:00.189753  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	I1009 19:36:00.189820  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	I1009 19:36:00.189897  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	I1009 19:36:00.189910  194626 kubeadm.go:318] 
	I1009 19:36:00.190035  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:36:00.190115  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:36:00.190192  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:36:00.190268  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:36:00.190333  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:36:00.190440  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:36:00.190476  194626 kubeadm.go:318] 
	I1009 19:36:00.190536  194626 kubeadm.go:402] duration metric: took 8m9.573023353s to StartCluster
	I1009 19:36:00.190620  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:36:00.190688  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:36:00.221163  194626 cri.go:89] found id: ""
	I1009 19:36:00.221200  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.221211  194626 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:36:00.221218  194626 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:36:00.221282  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:36:00.248439  194626 cri.go:89] found id: ""
	I1009 19:36:00.248473  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.248486  194626 logs.go:284] No container was found matching "etcd"
	I1009 19:36:00.248498  194626 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:36:00.248564  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:36:00.275751  194626 cri.go:89] found id: ""
	I1009 19:36:00.275781  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.275792  194626 logs.go:284] No container was found matching "coredns"
	I1009 19:36:00.275801  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:36:00.275868  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:36:00.304183  194626 cri.go:89] found id: ""
	I1009 19:36:00.304218  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.304227  194626 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:36:00.304233  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:36:00.304286  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:36:00.333120  194626 cri.go:89] found id: ""
	I1009 19:36:00.333154  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.333164  194626 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:36:00.333171  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:36:00.333221  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:36:00.362504  194626 cri.go:89] found id: ""
	I1009 19:36:00.362527  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.362536  194626 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:36:00.362542  194626 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:36:00.362602  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:36:00.391912  194626 cri.go:89] found id: ""
	I1009 19:36:00.391939  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.391949  194626 logs.go:284] No container was found matching "kindnet"
	I1009 19:36:00.391964  194626 logs.go:123] Gathering logs for kubelet ...
	I1009 19:36:00.391982  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:36:00.460680  194626 logs.go:123] Gathering logs for dmesg ...
	I1009 19:36:00.460722  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:36:00.473600  194626 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:36:00.473634  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:36:00.538515  194626 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:36:00.538542  194626 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:36:00.538556  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:36:00.601750  194626 logs.go:123] Gathering logs for container status ...
	I1009 19:36:00.601793  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 19:36:00.631755  194626 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 19:36:00.631818  194626 out.go:285] * 
	W1009 19:36:00.631894  194626 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.631914  194626 out.go:285] * 
	W1009 19:36:00.633671  194626 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:36:00.637455  194626 out.go:203] 
	W1009 19:36:00.638829  194626 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.638866  194626 out.go:285] * 
	I1009 19:36:00.640615  194626 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:37:25 ha-898615 crio[777]: time="2025-10-09T19:37:25.029962448Z" level=info msg="createCtr: deleting container f0463fb05c9a236bd42767e21fccab4aaedc76d8a27f273f0b303fc1be13f493 from storage" id=f7c60711-99ef-4e00-a9dd-67c148495d8d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:25 ha-898615 crio[777]: time="2025-10-09T19:37:25.032912786Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-898615_kube-system_2d43b86ac03e93e11f35c923e2560103_0" id=7d35822a-5efe-4427-b230-14e5799f66d4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:25 ha-898615 crio[777]: time="2025-10-09T19:37:25.033171537Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-898615_kube-system_bc0e16a1814c5389485436acdfc968ed_0" id=f7c60711-99ef-4e00-a9dd-67c148495d8d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.00094666Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=be7e00b5-29c5-4c96-bb3c-32e71b451835 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.002018236Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=1ccd5bbb-e526-4f37-a424-321687933f28 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.003064189Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-898615/kube-controller-manager" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.003332904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.006742494Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.007212425Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.026056442Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.027514114Z" level=info msg="createCtr: deleting container ID 36a51f78e539029fe2de362c3a84544e8cc3cf7dc7b666f7684fc54c2facb606 from idIndex" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.027573974Z" level=info msg="createCtr: removing container 36a51f78e539029fe2de362c3a84544e8cc3cf7dc7b666f7684fc54c2facb606" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.027610427Z" level=info msg="createCtr: deleting container 36a51f78e539029fe2de362c3a84544e8cc3cf7dc7b666f7684fc54c2facb606 from storage" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.029921646Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-898615_kube-system_606a9e8949f01295d66148e5eac379ce_0" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.001545609Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=02d2fba2-28a5-471d-ae43-8b572455a98b name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.003762187Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=2a3b67b3-badd-4ded-96d4-4e4daa736fa3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.004896442Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-898615/kube-apiserver" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.005099326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.008541201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.008958606Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.026617624Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.027992361Z" level=info msg="createCtr: deleting container ID 46b6cad4087c1b4658ce50e3012b46105eb18113385d6e456b9b9934d03e532f from idIndex" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.02802929Z" level=info msg="createCtr: removing container 46b6cad4087c1b4658ce50e3012b46105eb18113385d6e456b9b9934d03e532f" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.028068885Z" level=info msg="createCtr: deleting container 46b6cad4087c1b4658ce50e3012b46105eb18113385d6e456b9b9934d03e532f from storage" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.030415662Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-898615_kube-system_aad558bc9f2efde8d3c90d373798de18_0" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:37:36.474523    3349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:36.475249    3349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:36.476825    3349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:36.477424    3349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:36.478974    3349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:37:36 up  1:20,  0 user,  load average: 0.02, 0.05, 1.72
	Linux ha-898615 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:37:25 ha-898615 kubelet[1937]: E1009 19:37:25.034797    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-898615" podUID="bc0e16a1814c5389485436acdfc968ed"
	Oct 09 19:37:27 ha-898615 kubelet[1937]: E1009 19:37:27.643057    1937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:37:27 ha-898615 kubelet[1937]: I1009 19:37:27.812540    1937 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:37:27 ha-898615 kubelet[1937]: E1009 19:37:27.812939    1937 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	Oct 09 19:37:28 ha-898615 kubelet[1937]: E1009 19:37:28.000453    1937 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:37:28 ha-898615 kubelet[1937]: E1009 19:37:28.030276    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:37:28 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:28 ha-898615 kubelet[1937]:  > podSandboxID="da9ccac0760165455089feb0a833fa92808edbd25f27148d0daf896a8d20b03b"
	Oct 09 19:37:28 ha-898615 kubelet[1937]: E1009 19:37:28.030421    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:37:28 ha-898615 kubelet[1937]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-898615_kube-system(606a9e8949f01295d66148e5eac379ce): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:28 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:28 ha-898615 kubelet[1937]: E1009 19:37:28.030462    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-898615" podUID="606a9e8949f01295d66148e5eac379ce"
	Oct 09 19:37:29 ha-898615 kubelet[1937]: E1009 19:37:29.717131    1937 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-898615.186ce986e63acb66  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-898615,UID:ha-898615,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-898615 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-898615,},FirstTimestamp:2025-10-09 19:31:59.992523622 +0000 UTC m=+0.320486905,LastTimestamp:2025-10-09 19:31:59.992523622 +0000 UTC m=+0.320486905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-898615,}"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.000922    1937 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.020052    1937 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-898615\" not found"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.030731    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:37:30 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:30 ha-898615 kubelet[1937]:  > podSandboxID="d0dc08f9b3192b73d8d0a9663be3c0a1eb6c98f898ae3f73ccefd6abc80c6b8e"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.030862    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:37:30 ha-898615 kubelet[1937]:         container kube-apiserver start failed in pod kube-apiserver-ha-898615_kube-system(aad558bc9f2efde8d3c90d373798de18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:30 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.030902    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-898615" podUID="aad558bc9f2efde8d3c90d373798de18"
	Oct 09 19:37:34 ha-898615 kubelet[1937]: E1009 19:37:34.644344    1937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:37:34 ha-898615 kubelet[1937]: I1009 19:37:34.814758    1937 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:37:34 ha-898615 kubelet[1937]: E1009 19:37:34.815194    1937 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615: exit status 6 (302.49127ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:37:36.854341  202434 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-898615" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (1.56s)
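Note on the failure mode above: every control-plane pod on ha-898615 is stuck in CreateContainerError with "container create failed: cannot open sd-bus: No such file or directory" (see the CRI-O and kubelet excerpts), so kube-apiserver never comes up and the later `node add`/status checks fail only as a consequence. This error typically indicates that the OCI runtime was asked to use the systemd cgroup manager but cannot reach the systemd D-Bus socket inside the node container. A minimal manual check against the ha-898615 docker container (illustrative commands, not part of the test suite) might look like:

	docker exec ha-898615 ls -l /run/dbus/system_bus_socket /run/systemd/private
	docker exec ha-898615 grep -Rn cgroup_manager /etc/crio/
	docker exec ha-898615 systemctl is-system-running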

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (1.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-898615 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-898615 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (46.685538ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-898615

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-898615 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-898615 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
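Both errors above are downstream of the failed cluster start: `kubeadm init` never completed, so minikube never wrote a `ha-898615` context into /home/jenkins/minikube-integration/21683-137890/kubeconfig, and the jsonpath query therefore returns nothing to decode. A quick way to confirm that state by hand (illustrative commands, not part of the test) would be:

	KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig kubectl config get-contexts
	KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig kubectl config view -o jsonpath='{.contexts[*].name}'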
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-898615
helpers_test.go:243: (dbg) docker inspect ha-898615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	        "Created": "2025-10-09T19:27:46.185451139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195210,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:27:46.220087387Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hostname",
	        "HostsPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hosts",
	        "LogPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e-json.log",
	        "Name": "/ha-898615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-898615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-898615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	                "LowerDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-898615",
	                "Source": "/var/lib/docker/volumes/ha-898615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-898615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-898615",
	                "name.minikube.sigs.k8s.io": "ha-898615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39eaa0a5ee22c7c1e0e0329f3f944afa2ddef6cd571ebd9f0aa050805b81a54f",
	            "SandboxKey": "/var/run/docker/netns/39eaa0a5ee22",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-898615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:31:dc:da:08:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a28b71fbb3464ef90fae817eea2f214ee0a3d1fe379af7ad6c6cc41b0261919e",
	                    "EndpointID": "7930ac5db0626d63671f2a77753202141442de70603e1b4acf6213e0bd34944d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-898615",
	                        "24eb0e5b6c52"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615: exit status 6 (305.53079ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:37:37.225990  202569 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-158523 image ls --format json --alsologtostderr                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image build -t localhost/my-image:functional-158523 testdata/build --alsologtostderr          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image ls --format table --alsologtostderr                                                     │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image ls                                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ delete  │ -p functional-158523                                                                                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │ 09 Oct 25 19:27 UTC │
	│ start   │ ha-898615 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- rollout status deployment/busybox                                                          │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node add --alsologtostderr -v 5                                                                       │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:27:40
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:27:40.840216  194626 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:27:40.840464  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840473  194626 out.go:374] Setting ErrFile to fd 2...
	I1009 19:27:40.840478  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840669  194626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:27:40.841171  194626 out.go:368] Setting JSON to false
	I1009 19:27:40.842061  194626 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4210,"bootTime":1760033851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:27:40.842167  194626 start.go:143] virtualization: kvm guest
	I1009 19:27:40.844410  194626 out.go:179] * [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:27:40.845759  194626 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:27:40.845801  194626 notify.go:221] Checking for updates...
	I1009 19:27:40.848757  194626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:27:40.850175  194626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:27:40.851491  194626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:27:40.852705  194626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:27:40.853892  194626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:27:40.855321  194626 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:27:40.878925  194626 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:27:40.879089  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:40.938194  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.927865936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:40.938299  194626 docker.go:319] overlay module found
	I1009 19:27:40.940236  194626 out.go:179] * Using the docker driver based on user configuration
	I1009 19:27:40.941822  194626 start.go:309] selected driver: docker
	I1009 19:27:40.941843  194626 start.go:930] validating driver "docker" against <nil>
	I1009 19:27:40.941856  194626 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:27:40.942433  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:41.005454  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.995221815 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:41.005685  194626 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 19:27:41.005966  194626 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:27:41.007942  194626 out.go:179] * Using Docker driver with root privileges
	I1009 19:27:41.009445  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:41.009493  194626 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 19:27:41.009501  194626 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:27:41.009585  194626 start.go:353] cluster config:
	{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1009 19:27:41.010949  194626 out.go:179] * Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	I1009 19:27:41.012190  194626 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:27:41.013322  194626 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:27:41.014388  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.014435  194626 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:27:41.014445  194626 cache.go:58] Caching tarball of preloaded images
	I1009 19:27:41.014436  194626 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:27:41.014539  194626 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:27:41.014554  194626 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:27:41.014900  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:41.014929  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json: {Name:mk248a27bef73ecdf1bd71857a83cb8ef52e81ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
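The cluster config dumped above is persisted as the profile's config.json at the path shown in the log. As a quick sketch (jq on the Jenkins host is an assumption), the recorded cluster name, Kubernetes version and runtime can be read back with:

    # Read the saved profile config; path copied from the log above, jq availability is assumed
    jq '.KubernetesConfig | {ClusterName, KubernetesVersion, ContainerRuntime}' \
      /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json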
	I1009 19:27:41.034874  194626 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:27:41.034900  194626 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:27:41.034920  194626 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:27:41.034961  194626 start.go:361] acquireMachinesLock for ha-898615: {Name:mk23a3ebf19c307a491c00f9452d757837bd240e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:27:41.035085  194626 start.go:365] duration metric: took 102.16µs to acquireMachinesLock for "ha-898615"
	I1009 19:27:41.035119  194626 start.go:94] Provisioning new machine with config: &{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:27:41.035201  194626 start.go:126] createHost starting for "" (driver="docker")
	I1009 19:27:41.037427  194626 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 19:27:41.037659  194626 start.go:160] libmachine.API.Create for "ha-898615" (driver="docker")
	I1009 19:27:41.037695  194626 client.go:168] LocalClient.Create starting
	I1009 19:27:41.037764  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem
	I1009 19:27:41.037804  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037821  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.037887  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem
	I1009 19:27:41.037913  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037927  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.038348  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:27:41.055472  194626 cli_runner.go:211] docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:27:41.055548  194626 network_create.go:284] running [docker network inspect ha-898615] to gather additional debugging logs...
	I1009 19:27:41.055569  194626 cli_runner.go:164] Run: docker network inspect ha-898615
	W1009 19:27:41.074024  194626 cli_runner.go:211] docker network inspect ha-898615 returned with exit code 1
	I1009 19:27:41.074056  194626 network_create.go:287] error running [docker network inspect ha-898615]: docker network inspect ha-898615: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-898615 not found
	I1009 19:27:41.074071  194626 network_create.go:289] output of [docker network inspect ha-898615]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-898615 not found
	
	** /stderr **
	I1009 19:27:41.074158  194626 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:41.091849  194626 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a266e0}
	I1009 19:27:41.091899  194626 network_create.go:124] attempt to create docker network ha-898615 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 19:27:41.091955  194626 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-898615 ha-898615
	I1009 19:27:41.153434  194626 network_create.go:108] docker network ha-898615 192.168.49.0/24 created
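minikube picked the first free private subnet (192.168.49.0/24) and created a dedicated bridge network for the profile with the docker network create call above. A quick way to confirm the subnet and gateway it ended up with, using only the standard docker CLI:

    # Inspect the bridge network created for the ha-898615 profile
    docker network inspect ha-898615 --format '{{json .IPAM.Config}}'
    # Expect something like: [{"Subnet":"192.168.49.0/24","Gateway":"192.168.49.1"}]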
	I1009 19:27:41.153469  194626 kic.go:121] calculated static IP "192.168.49.2" for the "ha-898615" container
	I1009 19:27:41.153533  194626 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:27:41.170560  194626 cli_runner.go:164] Run: docker volume create ha-898615 --label name.minikube.sigs.k8s.io=ha-898615 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:27:41.190645  194626 oci.go:103] Successfully created a docker volume ha-898615
	I1009 19:27:41.190737  194626 cli_runner.go:164] Run: docker run --rm --name ha-898615-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --entrypoint /usr/bin/test -v ha-898615:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:27:41.600938  194626 oci.go:107] Successfully prepared a docker volume ha-898615
	I1009 19:27:41.600967  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.600993  194626 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:27:41.601055  194626 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 19:27:46.106521  194626 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.505414435s)
	I1009 19:27:46.106562  194626 kic.go:203] duration metric: took 4.50556336s to extract preloaded images to volume ...
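The preload step above unpacks the cached image tarball into the named volume ha-898615 via a throwaway tar container, so CRI-O inside the node starts with its image store already populated. To see where that volume lives on the host (plain docker CLI):

    # Host-side location of the volume that now holds the preloaded image store
    docker volume inspect ha-898615 --format '{{.Mountpoint}}'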
	W1009 19:27:46.106658  194626 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 19:27:46.106697  194626 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 19:27:46.106744  194626 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:27:46.168307  194626 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-898615 --name ha-898615 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-898615 --network ha-898615 --ip 192.168.49.2 --volume ha-898615:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:27:46.438758  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Running}}
	I1009 19:27:46.461401  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.481444  194626 cli_runner.go:164] Run: docker exec ha-898615 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:27:46.527865  194626 oci.go:144] the created container "ha-898615" has a running status.
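The docker run invocation above publishes ports 22, 2376, 5000, 8443 and 32443 on loopback-only ephemeral host ports; the 22/tcp mapping (32783 in this run) is what the SSH provisioner uses next. The live mappings can be listed with:

    # Host port bound to the node container's SSH port
    docker port ha-898615 22
    # Or list every published port for the node container
    docker port ha-898615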
	I1009 19:27:46.527897  194626 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa...
	I1009 19:27:46.644201  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 19:27:46.644249  194626 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:27:46.674019  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.696195  194626 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:27:46.696225  194626 kic_runner.go:114] Args: [docker exec --privileged ha-898615 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:27:46.751886  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.774821  194626 machine.go:93] provisionDockerMachine start ...
	I1009 19:27:46.774929  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.795147  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.795497  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.795517  194626 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:27:46.948463  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
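Provisioning runs over a plain SSH session to that loopback port mapping, authenticated with the freshly generated key as the docker user. A hand-run equivalent of the session (port and key path are the ones from this particular run and change every time):

    # Reproduce the provisioner's SSH session by hand; port and key path copied from this log
    ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa \
        -p 32783 docker@127.0.0.1 hostname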
	
	I1009 19:27:46.948514  194626 ubuntu.go:182] provisioning hostname "ha-898615"
	I1009 19:27:46.948583  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.967551  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.967801  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.967821  194626 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-898615 && echo "ha-898615" | sudo tee /etc/hostname
	I1009 19:27:47.127119  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:47.127192  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.145629  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.145957  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.145990  194626 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-898615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-898615/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-898615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:27:47.294310  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:27:47.294342  194626 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:27:47.294368  194626 ubuntu.go:190] setting up certificates
	I1009 19:27:47.294401  194626 provision.go:84] configureAuth start
	I1009 19:27:47.294454  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:47.312608  194626 provision.go:143] copyHostCerts
	I1009 19:27:47.312651  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312684  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:27:47.312731  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312817  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:27:47.312923  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.312952  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:27:47.312962  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.313014  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:27:47.313086  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313114  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:27:47.313124  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313163  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:27:47.313236  194626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.ha-898615 san=[127.0.0.1 192.168.49.2 ha-898615 localhost minikube]
	I1009 19:27:47.539620  194626 provision.go:177] copyRemoteCerts
	I1009 19:27:47.539697  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:27:47.539740  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.557819  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:47.662665  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:27:47.662781  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:27:47.683372  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:27:47.683457  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:27:47.701669  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:27:47.701736  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:27:47.720164  194626 provision.go:87] duration metric: took 425.744193ms to configureAuth
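configureAuth generates a docker-machine style server certificate whose SANs cover 127.0.0.1, the node IP 192.168.49.2 and the names listed above, then copies it to /etc/docker inside the node. As a sketch, the SAN list can be confirmed on the host with openssl (file path as in the log):

    # Show the SANs in the server certificate generated during configureAuth
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'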
	I1009 19:27:47.720200  194626 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:27:47.720451  194626 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:47.720564  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.738437  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.738688  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.738714  194626 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:27:48.000107  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:27:48.000134  194626 machine.go:96] duration metric: took 1.225290325s to provisionDockerMachine
	I1009 19:27:48.000148  194626 client.go:171] duration metric: took 6.962444548s to LocalClient.Create
	I1009 19:27:48.000174  194626 start.go:168] duration metric: took 6.962517267s to libmachine.API.Create "ha-898615"
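As part of provisioning, CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' was written to /etc/sysconfig/crio.minikube and crio restarted, presumably so the runtime accepts plain-HTTP registries with addresses inside the service CIDR (for example the registry addon's ClusterIP). A quick check once the node exists:

    # Read back the option written during provisioning (file path taken from the log)
    minikube ssh -p ha-898615 -- cat /etc/sysconfig/crio.minikube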
	I1009 19:27:48.000183  194626 start.go:294] postStartSetup for "ha-898615" (driver="docker")
	I1009 19:27:48.000199  194626 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:27:48.000266  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:27:48.000306  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.017815  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.123997  194626 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:27:48.127754  194626 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:27:48.127793  194626 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:27:48.127806  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:27:48.127864  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:27:48.127968  194626 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:27:48.127983  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:27:48.128079  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:27:48.136280  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:48.157989  194626 start.go:297] duration metric: took 157.790225ms for postStartSetup
	I1009 19:27:48.158323  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.176302  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:48.176728  194626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:27:48.176785  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.195218  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.296952  194626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:27:48.301724  194626 start.go:129] duration metric: took 7.26650252s to createHost
	I1009 19:27:48.301754  194626 start.go:84] releasing machines lock for "ha-898615", held for 7.266653034s
	I1009 19:27:48.301837  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.319813  194626 ssh_runner.go:195] Run: cat /version.json
	I1009 19:27:48.319860  194626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:27:48.319866  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.319919  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.339000  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.339730  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.494727  194626 ssh_runner.go:195] Run: systemctl --version
	I1009 19:27:48.501535  194626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:27:48.537367  194626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:27:48.542466  194626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:27:48.542536  194626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:27:48.568823  194626 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:27:48.568848  194626 start.go:496] detecting cgroup driver to use...
	I1009 19:27:48.568888  194626 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:27:48.568936  194626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:27:48.585765  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:27:48.598310  194626 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:27:48.598367  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:27:48.615925  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:27:48.634773  194626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:27:48.716369  194626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:27:48.805777  194626 docker.go:234] disabling docker service ...
	I1009 19:27:48.805849  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:27:48.826709  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:27:48.840263  194626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:27:48.923854  194626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:27:49.007492  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:27:49.020704  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:27:49.035117  194626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:27:49.035178  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.045691  194626 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:27:49.045759  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.055054  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.064210  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.073237  194626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:27:49.081510  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.090399  194626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.104388  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.113513  194626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:27:49.121175  194626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:27:49.128977  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.201984  194626 ssh_runner.go:195] Run: sudo systemctl restart crio
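The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.10.1 pause image, the systemd cgroup manager with conmon in the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 as a default sysctl, after which crio is restarted. The resulting values can be read back directly (a sketch using the file path and key names from the log):

    # Verify the drop-in values the sed edits are expected to have set
    minikube ssh -p ha-898615 -- \
      sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf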
	I1009 19:27:49.310640  194626 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:27:49.310716  194626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:27:49.315028  194626 start.go:564] Will wait 60s for crictl version
	I1009 19:27:49.315098  194626 ssh_runner.go:195] Run: which crictl
	I1009 19:27:49.319134  194626 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:27:49.344131  194626 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:27:49.344210  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.374167  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.406880  194626 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:27:49.408191  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:49.425730  194626 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:27:49.430174  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
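The bash one-liner above rewrites /etc/hosts inside the node so host.minikube.internal points at the bridge gateway 192.168.49.1, giving processes on the node a stable name for the host machine. To confirm the entry resolves:

    # host.minikube.internal should map to the gateway entry added above
    minikube ssh -p ha-898615 -- getent hosts host.minikube.internal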
	I1009 19:27:49.441269  194626 kubeadm.go:883] updating cluster {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:27:49.441374  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:49.441444  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.474537  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.474563  194626 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:27:49.474626  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.501408  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.501432  194626 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:27:49.501440  194626 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:27:49.501565  194626 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-898615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:27:49.501644  194626 ssh_runner.go:195] Run: crio config
	I1009 19:27:49.548225  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:49.548247  194626 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:27:49.548270  194626 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:27:49.548295  194626 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-898615 NodeName:ha-898615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:27:49.548487  194626 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-898615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
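The generated kubeadm config above stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into one file, later written to /var/tmp/minikube/kubeadm.yaml.new on the node. Assuming the bundled kubeadm is new enough to ship the config validate subcommand (roughly v1.27 and later), the file could be sanity-checked before init runs:

    # Sketch only: 'kubeadm config validate' availability is an assumption; paths come from the log
    minikube ssh -p ha-898615 -- \
      sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new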
	
	I1009 19:27:49.548516  194626 kube-vip.go:115] generating kube-vip config ...
	I1009 19:27:49.548561  194626 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:27:49.560569  194626 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:27:49.560666  194626 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
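Because the lsmod check above found no ip_vs modules, kube-vip is configured for ARP mode with leader election rather than IPVS load-balancing, advertising the control-plane VIP 192.168.49.254/32 on eth0; the manifest is written as a static pod under /etc/kubernetes/manifests. Once the control plane is up, the elected leader should hold the VIP:

    # After startup, look for "inet 192.168.49.254/32" on the leading control-plane node
    minikube ssh -p ha-898615 -- ip addr show eth0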
	I1009 19:27:49.560713  194626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:27:49.568926  194626 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:27:49.569017  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:27:49.577516  194626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:27:49.590951  194626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:27:49.606324  194626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 19:27:49.618986  194626 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 19:27:49.633526  194626 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:27:49.637412  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:49.647415  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.727039  194626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:49.749827  194626 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615 for IP: 192.168.49.2
	I1009 19:27:49.749848  194626 certs.go:195] generating shared ca certs ...
	I1009 19:27:49.749916  194626 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.750063  194626 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:27:49.750105  194626 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:27:49.750114  194626 certs.go:257] generating profile certs ...
	I1009 19:27:49.750166  194626 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key
	I1009 19:27:49.750191  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt with IP's: []
	I1009 19:27:49.922286  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt ...
	I1009 19:27:49.922322  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt: {Name:mke5f0b5d846145d5885091c3fdef11a03b4705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923243  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key ...
	I1009 19:27:49.923266  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key: {Name:mkb5fe559278888df39f6f81eb44f11c5b40eebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923364  194626 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38
	I1009 19:27:49.923404  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 19:27:50.174751  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 ...
	I1009 19:27:50.174785  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38: {Name:mk7158926fc69073b0db0c318bd9373ec8743788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.174962  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 ...
	I1009 19:27:50.174976  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38: {Name:mk3dc2ee28c7f607abddd15bf98fe46423492612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.175056  194626 certs.go:382] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt
	I1009 19:27:50.175135  194626 certs.go:386] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key
	I1009 19:27:50.175189  194626 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key
	I1009 19:27:50.175204  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt with IP's: []
	I1009 19:27:50.225715  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt ...
	I1009 19:27:50.225750  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt: {Name:mk61103353c5c3a0cbf54b18a910b0c83f19ee7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.225929  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key ...
	I1009 19:27:50.225941  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key: {Name:mkbe3072f67f16772d02b0025253c832dee4c437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.226013  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:27:50.226031  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:27:50.226044  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:27:50.226054  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:27:50.226067  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:27:50.226077  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:27:50.226088  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:27:50.226099  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:27:50.226151  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:27:50.226183  194626 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:27:50.226195  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:27:50.226219  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:27:50.226240  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:27:50.226260  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:27:50.226298  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:50.226322  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.226336  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.226348  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.226853  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:27:50.245895  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:27:50.263570  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:27:50.281671  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:27:50.300223  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:27:50.317734  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:27:50.335066  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:27:50.352704  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:27:50.370437  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:27:50.389421  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:27:50.406983  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:27:50.425614  194626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:27:50.439040  194626 ssh_runner.go:195] Run: openssl version
	I1009 19:27:50.445304  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:27:50.454013  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457873  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457929  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.491581  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:27:50.500958  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:27:50.509597  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513479  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513541  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.548203  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:27:50.557261  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:27:50.565908  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570090  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570139  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.604463  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:27:50.613756  194626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:27:50.617456  194626 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:27:50.617519  194626 kubeadm.go:400] StartCluster: {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:50.617611  194626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:27:50.617699  194626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:27:50.651007  194626 cri.go:89] found id: ""
	I1009 19:27:50.651094  194626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:27:50.660625  194626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:27:50.669536  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:27:50.669585  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:27:50.678565  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:27:50.678583  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:27:50.678632  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:27:50.686673  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:27:50.686739  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:27:50.694481  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:27:50.702511  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:27:50.702571  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:27:50.711335  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.719367  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:27:50.719437  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.727079  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:27:50.735773  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:27:50.735828  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:27:50.743402  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:27:50.783476  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:27:50.783553  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:27:50.804988  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:27:50.805072  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:27:50.805120  194626 kubeadm.go:318] OS: Linux
	I1009 19:27:50.805170  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:27:50.805244  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:27:50.805294  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:27:50.805368  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:27:50.805464  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:27:50.805570  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:27:50.805644  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:27:50.805709  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:27:50.865026  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:27:50.865138  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:27:50.865228  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:27:50.872900  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:27:50.874725  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:27:50.874828  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:27:50.874946  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:27:51.069934  194626 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:27:51.167065  194626 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:27:51.280550  194626 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:27:51.481119  194626 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:27:51.788062  194626 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:27:51.788230  194626 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:51.872181  194626 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:27:51.872362  194626 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:52.084923  194626 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:27:52.283000  194626 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:27:52.633617  194626 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:27:52.633683  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:27:52.795142  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:27:52.946617  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:27:53.060032  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:27:53.125689  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:27:53.322626  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:27:53.323225  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:27:53.325508  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:27:53.328645  194626 out.go:252]   - Booting up control plane ...
	I1009 19:27:53.328749  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:27:53.328835  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:27:53.328914  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:27:53.343297  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:27:53.343474  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:27:53.350427  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:27:53.350584  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:27:53.350658  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:27:53.451355  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:27:53.451572  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:27:54.452237  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000953428s
	I1009 19:27:54.456489  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:27:54.456634  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:27:54.456766  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:27:54.456840  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:31:54.457899  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	I1009 19:31:54.458060  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	I1009 19:31:54.458160  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	I1009 19:31:54.458176  194626 kubeadm.go:318] 
	I1009 19:31:54.458302  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:31:54.458440  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:31:54.458603  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:31:54.458813  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:31:54.458922  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:31:54.459044  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:31:54.459063  194626 kubeadm.go:318] 
	I1009 19:31:54.462630  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:54.462803  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:31:54.463587  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1009 19:31:54.463708  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1009 19:31:54.463871  194626 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000953428s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 19:31:54.463985  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 19:31:57.247677  194626 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.783657743s)
	I1009 19:31:57.247781  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:31:57.261935  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:31:57.261998  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:31:57.270455  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:31:57.270478  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:31:57.270524  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:31:57.278767  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:31:57.278835  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:31:57.286752  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:31:57.295053  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:31:57.295113  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:31:57.303048  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.311557  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:31:57.311631  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.319853  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:31:57.328199  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:31:57.328265  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:31:57.336006  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:31:57.396540  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:57.457937  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:36:00.178531  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 19:36:00.178718  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 19:36:00.182333  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:36:00.182408  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:36:00.182502  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:36:00.182552  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:36:00.182582  194626 kubeadm.go:318] OS: Linux
	I1009 19:36:00.182645  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:36:00.182724  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:36:00.182769  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:36:00.182812  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:36:00.182853  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:36:00.182902  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:36:00.182967  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:36:00.183022  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:36:00.183086  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:36:00.183241  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:36:00.183374  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:36:00.183474  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:36:00.185572  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:36:00.185640  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:36:00.185695  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:36:00.185782  194626 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:36:00.185857  194626 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:36:00.185922  194626 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:36:00.185977  194626 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:36:00.186037  194626 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:36:00.186100  194626 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:36:00.186172  194626 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:36:00.186259  194626 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:36:00.186320  194626 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:36:00.186413  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:36:00.186457  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:36:00.186503  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:36:00.186567  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:36:00.186661  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:36:00.186746  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:36:00.186865  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:36:00.186965  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:36:00.188328  194626 out.go:252]   - Booting up control plane ...
	I1009 19:36:00.188439  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:36:00.188511  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:36:00.188565  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:36:00.188647  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:36:00.188734  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:36:00.188835  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:36:00.188946  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:36:00.188994  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:36:00.189107  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:36:00.189207  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:36:00.189257  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.196157ms
	I1009 19:36:00.189351  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:36:00.189534  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:36:00.189618  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:36:00.189686  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:36:00.189753  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	I1009 19:36:00.189820  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	I1009 19:36:00.189897  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	I1009 19:36:00.189910  194626 kubeadm.go:318] 
	I1009 19:36:00.190035  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:36:00.190115  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:36:00.190192  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:36:00.190268  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:36:00.190333  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:36:00.190440  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:36:00.190476  194626 kubeadm.go:318] 
	I1009 19:36:00.190536  194626 kubeadm.go:402] duration metric: took 8m9.573023353s to StartCluster
	I1009 19:36:00.190620  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:36:00.190688  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:36:00.221163  194626 cri.go:89] found id: ""
	I1009 19:36:00.221200  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.221211  194626 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:36:00.221218  194626 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:36:00.221282  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:36:00.248439  194626 cri.go:89] found id: ""
	I1009 19:36:00.248473  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.248486  194626 logs.go:284] No container was found matching "etcd"
	I1009 19:36:00.248498  194626 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:36:00.248564  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:36:00.275751  194626 cri.go:89] found id: ""
	I1009 19:36:00.275781  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.275792  194626 logs.go:284] No container was found matching "coredns"
	I1009 19:36:00.275801  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:36:00.275868  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:36:00.304183  194626 cri.go:89] found id: ""
	I1009 19:36:00.304218  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.304227  194626 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:36:00.304233  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:36:00.304286  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:36:00.333120  194626 cri.go:89] found id: ""
	I1009 19:36:00.333154  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.333164  194626 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:36:00.333171  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:36:00.333221  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:36:00.362504  194626 cri.go:89] found id: ""
	I1009 19:36:00.362527  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.362536  194626 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:36:00.362542  194626 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:36:00.362602  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:36:00.391912  194626 cri.go:89] found id: ""
	I1009 19:36:00.391939  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.391949  194626 logs.go:284] No container was found matching "kindnet"
	I1009 19:36:00.391964  194626 logs.go:123] Gathering logs for kubelet ...
	I1009 19:36:00.391982  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:36:00.460680  194626 logs.go:123] Gathering logs for dmesg ...
	I1009 19:36:00.460722  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:36:00.473600  194626 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:36:00.473634  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:36:00.538515  194626 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:36:00.538542  194626 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:36:00.538556  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:36:00.601750  194626 logs.go:123] Gathering logs for container status ...
	I1009 19:36:00.601793  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 19:36:00.631755  194626 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 19:36:00.631818  194626 out.go:285] * 
	W1009 19:36:00.631894  194626 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.631914  194626 out.go:285] * 
	W1009 19:36:00.633671  194626 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:36:00.637455  194626 out.go:203] 
	W1009 19:36:00.638829  194626 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.638866  194626 out.go:285] * 
	I1009 19:36:00.640615  194626 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.027573974Z" level=info msg="createCtr: removing container 36a51f78e539029fe2de362c3a84544e8cc3cf7dc7b666f7684fc54c2facb606" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.027610427Z" level=info msg="createCtr: deleting container 36a51f78e539029fe2de362c3a84544e8cc3cf7dc7b666f7684fc54c2facb606 from storage" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.029921646Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-898615_kube-system_606a9e8949f01295d66148e5eac379ce_0" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.001545609Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=02d2fba2-28a5-471d-ae43-8b572455a98b name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.003762187Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=2a3b67b3-badd-4ded-96d4-4e4daa736fa3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.004896442Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-898615/kube-apiserver" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.005099326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.008541201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.008958606Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.026617624Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.027992361Z" level=info msg="createCtr: deleting container ID 46b6cad4087c1b4658ce50e3012b46105eb18113385d6e456b9b9934d03e532f from idIndex" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.02802929Z" level=info msg="createCtr: removing container 46b6cad4087c1b4658ce50e3012b46105eb18113385d6e456b9b9934d03e532f" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.028068885Z" level=info msg="createCtr: deleting container 46b6cad4087c1b4658ce50e3012b46105eb18113385d6e456b9b9934d03e532f from storage" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.030415662Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-898615_kube-system_aad558bc9f2efde8d3c90d373798de18_0" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.001021942Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=3f44f988-8639-488a-803b-9563d41e2ed7 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.002120014Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=7c5ce8fe-0779-4178-8c58-fac7fa186ba3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.0033072Z" level=info msg="Creating container: kube-system/etcd-ha-898615/etcd" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.003686801Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.008487936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.009158437Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.031053012Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.032838277Z" level=info msg="createCtr: deleting container ID 70ffc7f39282e890f5d610a6d55742a39f611c4b72b1abc5e57fda6647072a96 from idIndex" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.032876297Z" level=info msg="createCtr: removing container 70ffc7f39282e890f5d610a6d55742a39f611c4b72b1abc5e57fda6647072a96" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.032910812Z" level=info msg="createCtr: deleting container 70ffc7f39282e890f5d610a6d55742a39f611c4b72b1abc5e57fda6647072a96 from storage" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.03537319Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-898615_kube-system_2d43b86ac03e93e11f35c923e2560103_0" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:37:37.830810    3511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:37.831343    3511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:37.832913    3511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:37.833362    3511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:37.835110    3511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:37:37 up  1:20,  0 user,  load average: 0.02, 0.04, 1.71
	Linux ha-898615 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:37:28 ha-898615 kubelet[1937]: E1009 19:37:28.030421    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:37:28 ha-898615 kubelet[1937]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-898615_kube-system(606a9e8949f01295d66148e5eac379ce): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:28 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:28 ha-898615 kubelet[1937]: E1009 19:37:28.030462    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-898615" podUID="606a9e8949f01295d66148e5eac379ce"
	Oct 09 19:37:29 ha-898615 kubelet[1937]: E1009 19:37:29.717131    1937 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-898615.186ce986e63acb66  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-898615,UID:ha-898615,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-898615 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-898615,},FirstTimestamp:2025-10-09 19:31:59.992523622 +0000 UTC m=+0.320486905,LastTimestamp:2025-10-09 19:31:59.992523622 +0000 UTC m=+0.320486905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-898615,}"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.000922    1937 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.020052    1937 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-898615\" not found"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.030731    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:37:30 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:30 ha-898615 kubelet[1937]:  > podSandboxID="d0dc08f9b3192b73d8d0a9663be3c0a1eb6c98f898ae3f73ccefd6abc80c6b8e"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.030862    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:37:30 ha-898615 kubelet[1937]:         container kube-apiserver start failed in pod kube-apiserver-ha-898615_kube-system(aad558bc9f2efde8d3c90d373798de18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:30 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.030902    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-898615" podUID="aad558bc9f2efde8d3c90d373798de18"
	Oct 09 19:37:34 ha-898615 kubelet[1937]: E1009 19:37:34.644344    1937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:37:34 ha-898615 kubelet[1937]: I1009 19:37:34.814758    1937 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:37:34 ha-898615 kubelet[1937]: E1009 19:37:34.815194    1937 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	Oct 09 19:37:37 ha-898615 kubelet[1937]: E1009 19:37:37.000406    1937 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:37:37 ha-898615 kubelet[1937]: E1009 19:37:37.035778    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:37:37 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:37 ha-898615 kubelet[1937]:  > podSandboxID="043d27dc8aae857d8a42667dfed1b409b78957e0ac42335f6d23fbc5540aedfd"
	Oct 09 19:37:37 ha-898615 kubelet[1937]: E1009 19:37:37.035886    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:37:37 ha-898615 kubelet[1937]:         container etcd start failed in pod etcd-ha-898615_kube-system(2d43b86ac03e93e11f35c923e2560103): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:37 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:37 ha-898615 kubelet[1937]: E1009 19:37:37.035919    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-898615" podUID="2d43b86ac03e93e11f35c923e2560103"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615: exit status 6 (311.995623ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:37:38.228449  202901 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-898615" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (1.37s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-898615" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-898615\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-898615\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nf
sshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-898615\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonIm
ages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-898615" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-898615\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-898615\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-898615\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\
"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --
output json"
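The two failed assertions above decode the `out/minikube-linux-amd64 profile list --output json` output and require the "ha-898615" profile to report 4 nodes and an "HAppy" status, while the captured JSON shows 1 node and "Starting". As a minimal, illustrative sketch (not part of the test suite), the same check could be expressed in Go as below, assuming only the field names visible in the JSON captured above (`valid`, `Name`, `Status`, `Config.Nodes`, `ControlPlane`):

	// Sketch only: decode `minikube profile list --output json` and report node
	// count and status for the ha-898615 profile. Field names are taken from the
	// JSON captured in this log; everything else is an assumption for illustration.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
			Config struct {
				Nodes []struct {
					ControlPlane bool `json:"ControlPlane"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			if p.Name != "ha-898615" {
				continue
			}
			cp := 0
			for _, n := range p.Config.Nodes {
				if n.ControlPlane {
					cp++
				}
			}
			// ha_test.go expects 4 nodes and Status "HAppy" for this profile.
			fmt.Printf("profile %s: %d node(s) (%d control-plane), status %q\n",
				p.Name, len(p.Config.Nodes), cp, p.Status)
		}
	}

Against the run above, the same decode would have reported 1 node with status "Starting", which is exactly why both assertions failed.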
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-898615
helpers_test.go:243: (dbg) docker inspect ha-898615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	        "Created": "2025-10-09T19:27:46.185451139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195210,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:27:46.220087387Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hostname",
	        "HostsPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hosts",
	        "LogPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e-json.log",
	        "Name": "/ha-898615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-898615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-898615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	                "LowerDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-898615",
	                "Source": "/var/lib/docker/volumes/ha-898615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-898615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-898615",
	                "name.minikube.sigs.k8s.io": "ha-898615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39eaa0a5ee22c7c1e0e0329f3f944afa2ddef6cd571ebd9f0aa050805b81a54f",
	            "SandboxKey": "/var/run/docker/netns/39eaa0a5ee22",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-898615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:31:dc:da:08:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a28b71fbb3464ef90fae817eea2f214ee0a3d1fe379af7ad6c6cc41b0261919e",
	                    "EndpointID": "7930ac5db0626d63671f2a77753202141442de70603e1b4acf6213e0bd34944d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-898615",
	                        "24eb0e5b6c52"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615: exit status 6 (300.777243ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:37:38.878309  203160 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterClusterStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-158523 image ls --format json --alsologtostderr                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image build -t localhost/my-image:functional-158523 testdata/build --alsologtostderr          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image ls --format table --alsologtostderr                                                     │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image ls                                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ delete  │ -p functional-158523                                                                                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │ 09 Oct 25 19:27 UTC │
	│ start   │ ha-898615 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- rollout status deployment/busybox                                                          │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node add --alsologtostderr -v 5                                                                       │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:27:40
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:27:40.840216  194626 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:27:40.840464  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840473  194626 out.go:374] Setting ErrFile to fd 2...
	I1009 19:27:40.840478  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840669  194626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:27:40.841171  194626 out.go:368] Setting JSON to false
	I1009 19:27:40.842061  194626 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4210,"bootTime":1760033851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:27:40.842167  194626 start.go:143] virtualization: kvm guest
	I1009 19:27:40.844410  194626 out.go:179] * [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:27:40.845759  194626 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:27:40.845801  194626 notify.go:221] Checking for updates...
	I1009 19:27:40.848757  194626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:27:40.850175  194626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:27:40.851491  194626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:27:40.852705  194626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:27:40.853892  194626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:27:40.855321  194626 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:27:40.878925  194626 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:27:40.879089  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:40.938194  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.927865936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:40.938299  194626 docker.go:319] overlay module found
	I1009 19:27:40.940236  194626 out.go:179] * Using the docker driver based on user configuration
	I1009 19:27:40.941822  194626 start.go:309] selected driver: docker
	I1009 19:27:40.941843  194626 start.go:930] validating driver "docker" against <nil>
	I1009 19:27:40.941856  194626 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:27:40.942433  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:41.005454  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.995221815 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:41.005685  194626 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 19:27:41.005966  194626 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:27:41.007942  194626 out.go:179] * Using Docker driver with root privileges
	I1009 19:27:41.009445  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:41.009493  194626 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 19:27:41.009501  194626 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:27:41.009585  194626 start.go:353] cluster config:
	{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:41.010949  194626 out.go:179] * Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	I1009 19:27:41.012190  194626 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:27:41.013322  194626 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:27:41.014388  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.014435  194626 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:27:41.014445  194626 cache.go:58] Caching tarball of preloaded images
	I1009 19:27:41.014436  194626 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:27:41.014539  194626 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:27:41.014554  194626 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:27:41.014900  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:41.014929  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json: {Name:mk248a27bef73ecdf1bd71857a83cb8ef52e81ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:41.034874  194626 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:27:41.034900  194626 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:27:41.034920  194626 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:27:41.034961  194626 start.go:361] acquireMachinesLock for ha-898615: {Name:mk23a3ebf19c307a491c00f9452d757837bd240e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:27:41.035085  194626 start.go:365] duration metric: took 102.16µs to acquireMachinesLock for "ha-898615"
	I1009 19:27:41.035119  194626 start.go:94] Provisioning new machine with config: &{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:27:41.035201  194626 start.go:126] createHost starting for "" (driver="docker")
	I1009 19:27:41.037427  194626 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 19:27:41.037659  194626 start.go:160] libmachine.API.Create for "ha-898615" (driver="docker")
	I1009 19:27:41.037695  194626 client.go:168] LocalClient.Create starting
	I1009 19:27:41.037764  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem
	I1009 19:27:41.037804  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037821  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.037887  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem
	I1009 19:27:41.037913  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037927  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.038348  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:27:41.055472  194626 cli_runner.go:211] docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:27:41.055548  194626 network_create.go:284] running [docker network inspect ha-898615] to gather additional debugging logs...
	I1009 19:27:41.055569  194626 cli_runner.go:164] Run: docker network inspect ha-898615
	W1009 19:27:41.074024  194626 cli_runner.go:211] docker network inspect ha-898615 returned with exit code 1
	I1009 19:27:41.074056  194626 network_create.go:287] error running [docker network inspect ha-898615]: docker network inspect ha-898615: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-898615 not found
	I1009 19:27:41.074071  194626 network_create.go:289] output of [docker network inspect ha-898615]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-898615 not found
	
	** /stderr **
	I1009 19:27:41.074158  194626 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:41.091849  194626 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a266e0}
	I1009 19:27:41.091899  194626 network_create.go:124] attempt to create docker network ha-898615 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 19:27:41.091955  194626 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-898615 ha-898615
	I1009 19:27:41.153434  194626 network_create.go:108] docker network ha-898615 192.168.49.0/24 created
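For reference, the network step above can be reproduced by hand with plain docker commands. This is a minimal sketch rather than minikube's own code path; the name, subnet, gateway, MTU option and labels are copied from the cli_runner lines above.

    # create a bridge network equivalent to the one minikube just created
    docker network create --driver=bridge \
      --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=ha-898615 \
      ha-898615

    # confirm the subnet and gateway that were assigned
    docker network inspect ha-898615 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'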
	I1009 19:27:41.153469  194626 kic.go:121] calculated static IP "192.168.49.2" for the "ha-898615" container
	I1009 19:27:41.153533  194626 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:27:41.170560  194626 cli_runner.go:164] Run: docker volume create ha-898615 --label name.minikube.sigs.k8s.io=ha-898615 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:27:41.190645  194626 oci.go:103] Successfully created a docker volume ha-898615
	I1009 19:27:41.190737  194626 cli_runner.go:164] Run: docker run --rm --name ha-898615-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --entrypoint /usr/bin/test -v ha-898615:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:27:41.600938  194626 oci.go:107] Successfully prepared a docker volume ha-898615
	I1009 19:27:41.600967  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.600993  194626 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:27:41.601055  194626 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 19:27:46.106521  194626 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.505414435s)
	I1009 19:27:46.106562  194626 kic.go:203] duration metric: took 4.50556336s to extract preloaded images to volume ...
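The volume-seeding pattern above (create a named volume, then untar the lz4 preload into it through a throwaway kicbase container) can be replayed with the same commands the log shows; a sketch, with the preload path and image digest taken from this run:

    # data volume that will become /var inside the node container
    docker volume create ha-898615 \
      --label name.minikube.sigs.k8s.io=ha-898615 \
      --label created_by.minikube.sigs.k8s.io=true

    # extract the preloaded images tarball into the volume
    docker run --rm --entrypoint /usr/bin/tar \
      -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro \
      -v ha-898615:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 \
      -I lz4 -xf /preloaded.tar -C /extractDir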
	W1009 19:27:46.106658  194626 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 19:27:46.106697  194626 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 19:27:46.106744  194626 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:27:46.168307  194626 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-898615 --name ha-898615 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-898615 --network ha-898615 --ip 192.168.49.2 --volume ha-898615:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:27:46.438758  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Running}}
	I1009 19:27:46.461401  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.481444  194626 cli_runner.go:164] Run: docker exec ha-898615 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:27:46.527865  194626 oci.go:144] the created container "ha-898615" has a running status.
	I1009 19:27:46.527897  194626 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa...
	I1009 19:27:46.644201  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 19:27:46.644249  194626 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:27:46.674019  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.696195  194626 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:27:46.696225  194626 kic_runner.go:114] Args: [docker exec --privileged ha-898615 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:27:46.751886  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
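Because the container was started with --publish=127.0.0.1::22, sshd is reachable on an ephemeral host port (32783 in this run, as the later "new ssh client" lines show), and the key written above is what authorizes the docker user. A minimal sketch for reaching the node manually, assuming the same profile paths as this run:

    # find the host port mapped to the container's sshd
    docker port ha-898615 22

    # log in as the docker user with the generated machine key (port taken from this run)
    ssh -p 32783 \
      -i /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa \
      docker@127.0.0.1 hostname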
	I1009 19:27:46.774821  194626 machine.go:93] provisionDockerMachine start ...
	I1009 19:27:46.774929  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.795147  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.795497  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.795517  194626 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:27:46.948463  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:46.948514  194626 ubuntu.go:182] provisioning hostname "ha-898615"
	I1009 19:27:46.948583  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.967551  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.967801  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.967821  194626 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-898615 && echo "ha-898615" | sudo tee /etc/hostname
	I1009 19:27:47.127119  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:47.127192  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.145629  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.145957  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.145990  194626 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-898615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-898615/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-898615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:27:47.294310  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:27:47.294342  194626 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:27:47.294368  194626 ubuntu.go:190] setting up certificates
	I1009 19:27:47.294401  194626 provision.go:84] configureAuth start
	I1009 19:27:47.294454  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:47.312608  194626 provision.go:143] copyHostCerts
	I1009 19:27:47.312651  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312684  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:27:47.312731  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312817  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:27:47.312923  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.312952  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:27:47.312962  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.313014  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:27:47.313086  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313114  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:27:47.313124  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313163  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:27:47.313236  194626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.ha-898615 san=[127.0.0.1 192.168.49.2 ha-898615 localhost minikube]
	I1009 19:27:47.539620  194626 provision.go:177] copyRemoteCerts
	I1009 19:27:47.539697  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:27:47.539740  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.557819  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:47.662665  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:27:47.662781  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:27:47.683372  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:27:47.683457  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:27:47.701669  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:27:47.701736  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:27:47.720164  194626 provision.go:87] duration metric: took 425.744193ms to configureAuth
	I1009 19:27:47.720200  194626 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:27:47.720451  194626 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:47.720564  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.738437  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.738688  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.738714  194626 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:27:48.000107  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:27:48.000134  194626 machine.go:96] duration metric: took 1.225290325s to provisionDockerMachine
	I1009 19:27:48.000148  194626 client.go:171] duration metric: took 6.962444548s to LocalClient.Create
	I1009 19:27:48.000174  194626 start.go:168] duration metric: took 6.962517267s to libmachine.API.Create "ha-898615"
	I1009 19:27:48.000183  194626 start.go:294] postStartSetup for "ha-898615" (driver="docker")
	I1009 19:27:48.000199  194626 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:27:48.000266  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:27:48.000306  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.017815  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.123997  194626 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:27:48.127754  194626 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:27:48.127793  194626 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:27:48.127806  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:27:48.127864  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:27:48.127968  194626 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:27:48.127983  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:27:48.128079  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:27:48.136280  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:48.157989  194626 start.go:297] duration metric: took 157.790225ms for postStartSetup
	I1009 19:27:48.158323  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.176302  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:48.176728  194626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:27:48.176785  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.195218  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.296952  194626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:27:48.301724  194626 start.go:129] duration metric: took 7.26650252s to createHost
	I1009 19:27:48.301754  194626 start.go:84] releasing machines lock for "ha-898615", held for 7.266653034s
	I1009 19:27:48.301837  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.319813  194626 ssh_runner.go:195] Run: cat /version.json
	I1009 19:27:48.319860  194626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:27:48.319866  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.319919  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.339000  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.339730  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.494727  194626 ssh_runner.go:195] Run: systemctl --version
	I1009 19:27:48.501535  194626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:27:48.537367  194626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:27:48.542466  194626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:27:48.542536  194626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:27:48.568823  194626 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:27:48.568848  194626 start.go:496] detecting cgroup driver to use...
	I1009 19:27:48.568888  194626 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:27:48.568936  194626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:27:48.585765  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:27:48.598310  194626 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:27:48.598367  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:27:48.615925  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:27:48.634773  194626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:27:48.716369  194626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:27:48.805777  194626 docker.go:234] disabling docker service ...
	I1009 19:27:48.805849  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:27:48.826709  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:27:48.840263  194626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:27:48.923854  194626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:27:49.007492  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:27:49.020704  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:27:49.035117  194626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:27:49.035178  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.045691  194626 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:27:49.045759  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.055054  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.064210  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.073237  194626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:27:49.081510  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.090399  194626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.104388  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.113513  194626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:27:49.121175  194626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:27:49.128977  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.201984  194626 ssh_runner.go:195] Run: sudo systemctl restart crio
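The sed edits above align cri-o with the rest of the setup: the kubeadm pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and unprivileged low ports. One way to confirm the result after the restart is to grep the drop-in on the node; a sketch, noting that the full layout of /etc/crio/crio.conf.d/02-crio.conf comes from the kicbase image and contains more than these keys:

    # inside the node: check the settings the sed commands were meant to produce
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected values per the commands above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
    sudo systemctl is-active crio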
	I1009 19:27:49.310640  194626 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:27:49.310716  194626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:27:49.315028  194626 start.go:564] Will wait 60s for crictl version
	I1009 19:27:49.315098  194626 ssh_runner.go:195] Run: which crictl
	I1009 19:27:49.319134  194626 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:27:49.344131  194626 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:27:49.344210  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.374167  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.406880  194626 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:27:49.408191  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:49.425730  194626 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:27:49.430174  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:49.441269  194626 kubeadm.go:883] updating cluster {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:27:49.441374  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:49.441444  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.474537  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.474563  194626 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:27:49.474626  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.501408  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.501432  194626 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:27:49.501440  194626 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:27:49.501565  194626 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-898615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
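The [Unit]/[Service]/[Install] snippet above is the kubelet systemd drop-in; per the scp lines further down it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes) alongside /lib/systemd/system/kubelet.service (352 bytes). To see the merged unit the node actually runs, one could do the following on the node (a sketch, not part of the test itself):

    # show the base unit plus all drop-ins, then the flags kubelet was started with
    systemctl cat kubelet
    ps -o args= -C kubelet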
	I1009 19:27:49.501644  194626 ssh_runner.go:195] Run: crio config
	I1009 19:27:49.548225  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:49.548247  194626 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:27:49.548270  194626 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:27:49.548295  194626 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-898615 NodeName:ha-898615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:27:49.548487  194626 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-898615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
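The kubeadm YAML above is later shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (the 2205-byte scp below) and copied to kubeadm.yaml near the end of this excerpt. A config of this shape can be sanity-checked without creating anything by letting kubeadm parse it in dry-run mode; a sketch, assuming it is run as root on the node with the binary minikube installed:

    # parse and validate the generated config without persisting any state
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run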
	
	I1009 19:27:49.548516  194626 kube-vip.go:115] generating kube-vip config ...
	I1009 19:27:49.548561  194626 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:27:49.560569  194626 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:27:49.560666  194626 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
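Note the kube-vip.go:163 line further up: because lsmod found no ip_vs modules inside the node, minikube skipped IPVS-based control-plane load balancing and the manifest above only advertises the 192.168.49.254 VIP via ARP. Whether the modules are available on a host can be checked, and on hosts where module loading is permitted they can be added; a sketch, not something this test run does:

    # check for the IPVS kernel modules
    lsmod | grep ip_vs || echo "ip_vs not loaded"

    # load them where allowed (requires the modules to exist for the host kernel)
    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh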
	I1009 19:27:49.560713  194626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:27:49.568926  194626 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:27:49.569017  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:27:49.577516  194626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:27:49.590951  194626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:27:49.606324  194626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 19:27:49.618986  194626 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 19:27:49.633526  194626 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:27:49.637412  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
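This is the second use of the same grep-and-append pattern: earlier it pinned host.minikube.internal to the network gateway 192.168.49.1, and here it pins control-plane.minikube.internal to the HA virtual IP 192.168.49.254. The end state inside the node can be verified with a plain grep; expected entries for this run:

    grep minikube.internal /etc/hosts
    # 192.168.49.1    host.minikube.internal
    # 192.168.49.254  control-plane.minikube.internal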
	I1009 19:27:49.647415  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.727039  194626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:49.749827  194626 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615 for IP: 192.168.49.2
	I1009 19:27:49.749848  194626 certs.go:195] generating shared ca certs ...
	I1009 19:27:49.749916  194626 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.750063  194626 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:27:49.750105  194626 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:27:49.750114  194626 certs.go:257] generating profile certs ...
	I1009 19:27:49.750166  194626 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key
	I1009 19:27:49.750191  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt with IP's: []
	I1009 19:27:49.922286  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt ...
	I1009 19:27:49.922322  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt: {Name:mke5f0b5d846145d5885091c3fdef11a03b4705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923243  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key ...
	I1009 19:27:49.923266  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key: {Name:mkb5fe559278888df39f6f81eb44f11c5b40eebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923364  194626 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38
	I1009 19:27:49.923404  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 19:27:50.174751  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 ...
	I1009 19:27:50.174785  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38: {Name:mk7158926fc69073b0db0c318bd9373ec8743788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.174962  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 ...
	I1009 19:27:50.174976  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38: {Name:mk3dc2ee28c7f607abddd15bf98fe46423492612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.175056  194626 certs.go:382] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt
	I1009 19:27:50.175135  194626 certs.go:386] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key
	I1009 19:27:50.175189  194626 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key
	I1009 19:27:50.175204  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt with IP's: []
	I1009 19:27:50.225715  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt ...
	I1009 19:27:50.225750  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt: {Name:mk61103353c5c3a0cbf54b18a910b0c83f19ee7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.225929  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key ...
	I1009 19:27:50.225941  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key: {Name:mkbe3072f67f16772d02b0025253c832dee4c437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.226013  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:27:50.226031  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:27:50.226044  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:27:50.226054  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:27:50.226067  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:27:50.226077  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:27:50.226088  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:27:50.226099  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:27:50.226151  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:27:50.226183  194626 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:27:50.226195  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:27:50.226219  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:27:50.226240  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:27:50.226260  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:27:50.226298  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:50.226322  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.226336  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.226348  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.226853  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:27:50.245895  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:27:50.263570  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:27:50.281671  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:27:50.300223  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:27:50.317734  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:27:50.335066  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:27:50.352704  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:27:50.370437  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:27:50.389421  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:27:50.406983  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:27:50.425614  194626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:27:50.439040  194626 ssh_runner.go:195] Run: openssl version
	I1009 19:27:50.445304  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:27:50.454013  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457873  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457929  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.491581  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:27:50.500958  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:27:50.509597  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513479  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513541  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.548203  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:27:50.557261  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:27:50.565908  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570090  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570139  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.604463  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
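The certificate steps above follow OpenSSL's hashed-directory convention: each CA copied under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its subject hash so TLS clients on the node will trust it. A minimal sketch of that scheme run by hand, using the minikubeCA.pem path from this log (any other file name would be an assumption):

    # print the subject hash OpenSSL uses to look the CA up in /etc/ssl/certs
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # expose the CA under <hash>.0 so libraries scanning the hashed directory find it
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"

In this run the computed hash is b5213941, matching the b5213941.0 link created above.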
	I1009 19:27:50.613756  194626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:27:50.617456  194626 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:27:50.617519  194626 kubeadm.go:400] StartCluster: {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:50.617611  194626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:27:50.617699  194626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:27:50.651007  194626 cri.go:89] found id: ""
	I1009 19:27:50.651094  194626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:27:50.660625  194626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:27:50.669536  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:27:50.669585  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:27:50.678565  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:27:50.678583  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:27:50.678632  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:27:50.686673  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:27:50.686739  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:27:50.694481  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:27:50.702511  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:27:50.702571  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:27:50.711335  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.719367  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:27:50.719437  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.727079  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:27:50.735773  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:27:50.735828  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
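The cleanup pass above probes each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that fails the check; on this first start none of the files exist, so every grep exits with status 2 and the rm calls are no-ops. A rough shell equivalent of what the log records (the endpoint string is the one used in this run):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected endpoint
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done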
	I1009 19:27:50.743402  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:27:50.783476  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:27:50.783553  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:27:50.804988  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:27:50.805072  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:27:50.805120  194626 kubeadm.go:318] OS: Linux
	I1009 19:27:50.805170  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:27:50.805244  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:27:50.805294  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:27:50.805368  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:27:50.805464  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:27:50.805570  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:27:50.805644  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:27:50.805709  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:27:50.865026  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:27:50.865138  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:27:50.865228  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:27:50.872900  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:27:50.874725  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:27:50.874828  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:27:50.874946  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:27:51.069934  194626 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:27:51.167065  194626 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:27:51.280550  194626 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:27:51.481119  194626 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:27:51.788062  194626 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:27:51.788230  194626 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:51.872181  194626 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:27:51.872362  194626 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:52.084923  194626 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:27:52.283000  194626 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:27:52.633617  194626 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:27:52.633683  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:27:52.795142  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:27:52.946617  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:27:53.060032  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:27:53.125689  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:27:53.322626  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:27:53.323225  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:27:53.325508  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:27:53.328645  194626 out.go:252]   - Booting up control plane ...
	I1009 19:27:53.328749  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:27:53.328835  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:27:53.328914  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:27:53.343297  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:27:53.343474  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:27:53.350427  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:27:53.350584  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:27:53.350658  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:27:53.451355  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:27:53.451572  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:27:54.452237  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000953428s
	I1009 19:27:54.456489  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:27:54.456634  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:27:54.456766  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:27:54.456840  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:31:54.457899  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	I1009 19:31:54.458060  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	I1009 19:31:54.458160  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	I1009 19:31:54.458176  194626 kubeadm.go:318] 
	I1009 19:31:54.458302  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:31:54.458440  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:31:54.458603  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:31:54.458813  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:31:54.458922  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:31:54.459044  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:31:54.459063  194626 kubeadm.go:318] 
	I1009 19:31:54.462630  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:54.462803  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:31:54.463587  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1009 19:31:54.463708  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1009 19:31:54.463871  194626 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000953428s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
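The kubeadm failure text above points at the container runtime for the root cause. On this node the runtime is CRI-O, so the two commands it suggests chain together roughly as follows; CONTAINERID is a placeholder for whatever ID the listing returns, which is not known from this log:

    # list every kube-* container CRI-O knows about, including ones that already exited
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # inspect the logs of the container that failed (substitute an ID from the listing)
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID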
	
	I1009 19:31:54.463985  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 19:31:57.247677  194626 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.783657743s)
	I1009 19:31:57.247781  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:31:57.261935  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:31:57.261998  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:31:57.270455  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:31:57.270478  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:31:57.270524  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:31:57.278767  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:31:57.278835  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:31:57.286752  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:31:57.295053  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:31:57.295113  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:31:57.303048  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.311557  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:31:57.311631  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.319853  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:31:57.328199  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:31:57.328265  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:31:57.336006  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:31:57.396540  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:57.457937  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:36:00.178531  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 19:36:00.178718  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 19:36:00.182333  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:36:00.182408  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:36:00.182502  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:36:00.182552  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:36:00.182582  194626 kubeadm.go:318] OS: Linux
	I1009 19:36:00.182645  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:36:00.182724  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:36:00.182769  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:36:00.182812  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:36:00.182853  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:36:00.182902  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:36:00.182967  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:36:00.183022  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:36:00.183086  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:36:00.183241  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:36:00.183374  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:36:00.183474  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:36:00.185572  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:36:00.185640  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:36:00.185695  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:36:00.185782  194626 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:36:00.185857  194626 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:36:00.185922  194626 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:36:00.185977  194626 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:36:00.186037  194626 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:36:00.186100  194626 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:36:00.186172  194626 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:36:00.186259  194626 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:36:00.186320  194626 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:36:00.186413  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:36:00.186457  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:36:00.186503  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:36:00.186567  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:36:00.186661  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:36:00.186746  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:36:00.186865  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:36:00.186965  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:36:00.188328  194626 out.go:252]   - Booting up control plane ...
	I1009 19:36:00.188439  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:36:00.188511  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:36:00.188565  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:36:00.188647  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:36:00.188734  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:36:00.188835  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:36:00.188946  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:36:00.188994  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:36:00.189107  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:36:00.189207  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:36:00.189257  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.196157ms
	I1009 19:36:00.189351  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:36:00.189534  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:36:00.189618  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:36:00.189686  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:36:00.189753  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	I1009 19:36:00.189820  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	I1009 19:36:00.189897  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	I1009 19:36:00.189910  194626 kubeadm.go:318] 
	I1009 19:36:00.190035  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:36:00.190115  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:36:00.190192  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:36:00.190268  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:36:00.190333  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:36:00.190440  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:36:00.190476  194626 kubeadm.go:318] 
	I1009 19:36:00.190536  194626 kubeadm.go:402] duration metric: took 8m9.573023353s to StartCluster
	I1009 19:36:00.190620  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:36:00.190688  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:36:00.221163  194626 cri.go:89] found id: ""
	I1009 19:36:00.221200  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.221211  194626 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:36:00.221218  194626 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:36:00.221282  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:36:00.248439  194626 cri.go:89] found id: ""
	I1009 19:36:00.248473  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.248486  194626 logs.go:284] No container was found matching "etcd"
	I1009 19:36:00.248498  194626 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:36:00.248564  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:36:00.275751  194626 cri.go:89] found id: ""
	I1009 19:36:00.275781  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.275792  194626 logs.go:284] No container was found matching "coredns"
	I1009 19:36:00.275801  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:36:00.275868  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:36:00.304183  194626 cri.go:89] found id: ""
	I1009 19:36:00.304218  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.304227  194626 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:36:00.304233  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:36:00.304286  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:36:00.333120  194626 cri.go:89] found id: ""
	I1009 19:36:00.333154  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.333164  194626 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:36:00.333171  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:36:00.333221  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:36:00.362504  194626 cri.go:89] found id: ""
	I1009 19:36:00.362527  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.362536  194626 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:36:00.362542  194626 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:36:00.362602  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:36:00.391912  194626 cri.go:89] found id: ""
	I1009 19:36:00.391939  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.391949  194626 logs.go:284] No container was found matching "kindnet"
	I1009 19:36:00.391964  194626 logs.go:123] Gathering logs for kubelet ...
	I1009 19:36:00.391982  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:36:00.460680  194626 logs.go:123] Gathering logs for dmesg ...
	I1009 19:36:00.460722  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:36:00.473600  194626 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:36:00.473634  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:36:00.538515  194626 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:36:00.538542  194626 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:36:00.538556  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:36:00.601750  194626 logs.go:123] Gathering logs for container status ...
	I1009 19:36:00.601793  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
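After the retry also times out, minikube gathers its own diagnostics: the kubelet and CRI-O journals, dmesg, a kubectl describe nodes attempt, and a container listing. Reproducing that collection by hand on the node looks roughly like the lines below; the redirections to files are an added convenience, not something the log shows, and describe nodes fails here because the apiserver never came up:

    sudo journalctl -u kubelet -n 400 > kubelet.log       # kubelet service journal
    sudo journalctl -u crio -n 400 > crio.log             # CRI-O service journal
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig > nodes.txt   # refused: apiserver is down
    sudo crictl ps -a > containers.txt                    # container status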
	W1009 19:36:00.631755  194626 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 19:36:00.631818  194626 out.go:285] * 
	W1009 19:36:00.631894  194626 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.631914  194626 out.go:285] * 
	W1009 19:36:00.633671  194626 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:36:00.637455  194626 out.go:203] 
	W1009 19:36:00.638829  194626 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.638866  194626 out.go:285] * 
	I1009 19:36:00.640615  194626 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.027573974Z" level=info msg="createCtr: removing container 36a51f78e539029fe2de362c3a84544e8cc3cf7dc7b666f7684fc54c2facb606" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.027610427Z" level=info msg="createCtr: deleting container 36a51f78e539029fe2de362c3a84544e8cc3cf7dc7b666f7684fc54c2facb606 from storage" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:28 ha-898615 crio[777]: time="2025-10-09T19:37:28.029921646Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-898615_kube-system_606a9e8949f01295d66148e5eac379ce_0" id=3dd12eb2-ee95-4368-8c8f-7a3fbed31ba9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.001545609Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=02d2fba2-28a5-471d-ae43-8b572455a98b name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.003762187Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=2a3b67b3-badd-4ded-96d4-4e4daa736fa3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.004896442Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-898615/kube-apiserver" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.005099326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.008541201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.008958606Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.026617624Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.027992361Z" level=info msg="createCtr: deleting container ID 46b6cad4087c1b4658ce50e3012b46105eb18113385d6e456b9b9934d03e532f from idIndex" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.02802929Z" level=info msg="createCtr: removing container 46b6cad4087c1b4658ce50e3012b46105eb18113385d6e456b9b9934d03e532f" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.028068885Z" level=info msg="createCtr: deleting container 46b6cad4087c1b4658ce50e3012b46105eb18113385d6e456b9b9934d03e532f from storage" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:30 ha-898615 crio[777]: time="2025-10-09T19:37:30.030415662Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-898615_kube-system_aad558bc9f2efde8d3c90d373798de18_0" id=d83213e0-a150-4723-bf95-313dfbf9cd89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.001021942Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=3f44f988-8639-488a-803b-9563d41e2ed7 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.002120014Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=7c5ce8fe-0779-4178-8c58-fac7fa186ba3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.0033072Z" level=info msg="Creating container: kube-system/etcd-ha-898615/etcd" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.003686801Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.008487936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.009158437Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.031053012Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.032838277Z" level=info msg="createCtr: deleting container ID 70ffc7f39282e890f5d610a6d55742a39f611c4b72b1abc5e57fda6647072a96 from idIndex" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.032876297Z" level=info msg="createCtr: removing container 70ffc7f39282e890f5d610a6d55742a39f611c4b72b1abc5e57fda6647072a96" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.032910812Z" level=info msg="createCtr: deleting container 70ffc7f39282e890f5d610a6d55742a39f611c4b72b1abc5e57fda6647072a96 from storage" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.03537319Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-898615_kube-system_2d43b86ac03e93e11f35c923e2560103_0" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:37:39.481512    3681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:39.482050    3681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:39.483689    3681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:39.484190    3681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:39.485927    3681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:37:39 up  1:20,  0 user,  load average: 0.02, 0.04, 1.71
	Linux ha-898615 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:37:28 ha-898615 kubelet[1937]: E1009 19:37:28.030421    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:37:28 ha-898615 kubelet[1937]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-898615_kube-system(606a9e8949f01295d66148e5eac379ce): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:28 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:28 ha-898615 kubelet[1937]: E1009 19:37:28.030462    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-898615" podUID="606a9e8949f01295d66148e5eac379ce"
	Oct 09 19:37:29 ha-898615 kubelet[1937]: E1009 19:37:29.717131    1937 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-898615.186ce986e63acb66  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-898615,UID:ha-898615,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-898615 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-898615,},FirstTimestamp:2025-10-09 19:31:59.992523622 +0000 UTC m=+0.320486905,LastTimestamp:2025-10-09 19:31:59.992523622 +0000 UTC m=+0.320486905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-898615,}"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.000922    1937 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.020052    1937 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-898615\" not found"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.030731    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:37:30 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:30 ha-898615 kubelet[1937]:  > podSandboxID="d0dc08f9b3192b73d8d0a9663be3c0a1eb6c98f898ae3f73ccefd6abc80c6b8e"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.030862    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:37:30 ha-898615 kubelet[1937]:         container kube-apiserver start failed in pod kube-apiserver-ha-898615_kube-system(aad558bc9f2efde8d3c90d373798de18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:30 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:30 ha-898615 kubelet[1937]: E1009 19:37:30.030902    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-898615" podUID="aad558bc9f2efde8d3c90d373798de18"
	Oct 09 19:37:34 ha-898615 kubelet[1937]: E1009 19:37:34.644344    1937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:37:34 ha-898615 kubelet[1937]: I1009 19:37:34.814758    1937 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:37:34 ha-898615 kubelet[1937]: E1009 19:37:34.815194    1937 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	Oct 09 19:37:37 ha-898615 kubelet[1937]: E1009 19:37:37.000406    1937 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:37:37 ha-898615 kubelet[1937]: E1009 19:37:37.035778    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:37:37 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:37 ha-898615 kubelet[1937]:  > podSandboxID="043d27dc8aae857d8a42667dfed1b409b78957e0ac42335f6d23fbc5540aedfd"
	Oct 09 19:37:37 ha-898615 kubelet[1937]: E1009 19:37:37.035886    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:37:37 ha-898615 kubelet[1937]:         container etcd start failed in pod etcd-ha-898615_kube-system(2d43b86ac03e93e11f35c923e2560103): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:37 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:37 ha-898615 kubelet[1937]: E1009 19:37:37.035919    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-898615" podUID="2d43b86ac03e93e11f35c923e2560103"
	

                                                
                                                
-- /stdout --
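The kubeadm output above already names the troubleshooting path: list the Kubernetes containers with crictl and read the logs of whichever one failed. A minimal shell sketch of that flow, run inside the ha-898615 node (e.g. via `out/minikube-linux-amd64 -p ha-898615 ssh`) and using the CRI-O socket path quoted in the log; CONTAINERID is whatever the first command prints:

    # list every Kubernetes container, including ones that never left the created state
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # then read the logs of the failing container
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

In this run the container list comes back empty (see the "container status" section above), because creation itself fails with "cannot open sd-bus: No such file or directory"; the relevant detail is therefore in the CRI-O and kubelet excerpts rather than in per-container logs.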
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615: exit status 6 (311.155439ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:37:39.873034  203501 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-898615" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.64s)
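The status output above also flags a stale kubectl context (the profile does not appear in the kubeconfig, per the status.go:458 error). The fix the warning itself suggests, scoped to this profile and using the same binary as the run above (a sketch):

    out/minikube-linux-amd64 -p ha-898615 update-context
    kubectl config current-context   # check which context kubectl points at afterwards

This repairs only the kubeconfig entry; the apiserver itself is still stopped, so the status checks in the following tests continue to fail.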

                                                
                                    
TestMultiControlPlane/serial/CopyFile (1.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 status --output json --alsologtostderr -v 5: exit status 6 (306.564378ms)

                                                
                                                
-- stdout --
	{"Name":"ha-898615","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:37:39.936625  203612 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:37:39.936917  203612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:39.936928  203612 out.go:374] Setting ErrFile to fd 2...
	I1009 19:37:39.936934  203612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:39.937130  203612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:37:39.937300  203612 out.go:368] Setting JSON to true
	I1009 19:37:39.937328  203612 mustload.go:65] Loading cluster: ha-898615
	I1009 19:37:39.937414  203612 notify.go:221] Checking for updates...
	I1009 19:37:39.937840  203612 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:37:39.937861  203612 status.go:174] checking status of ha-898615 ...
	I1009 19:37:39.938471  203612 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:37:39.959404  203612 status.go:371] ha-898615 host status = "Running" (err=<nil>)
	I1009 19:37:39.959433  203612 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:37:39.959669  203612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:37:39.978002  203612 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:37:39.978371  203612 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:37:39.978453  203612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:37:39.996432  203612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:37:40.101768  203612 ssh_runner.go:195] Run: systemctl --version
	I1009 19:37:40.108426  203612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:37:40.121113  203612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:37:40.179913  203612 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:37:40.170175158 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 19:37:40.180433  203612 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:37:40.180462  203612 api_server.go:166] Checking apiserver status ...
	I1009 19:37:40.180497  203612 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 19:37:40.190969  203612 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:37:40.190995  203612 status.go:463] ha-898615 apiserver status = Running (err=<nil>)
	I1009 19:37:40.191005  203612 status.go:176] ha-898615 status: &{Name:ha-898615 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:330: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-898615 status --output json --alsologtostderr -v 5" : exit status 6
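The JSON block above is what ha_test parses; the same fields can be checked by hand, e.g. with jq (a sketch, assuming jq is available; the field names are exactly those printed above):

    out/minikube-linux-amd64 -p ha-898615 status --output json | jq -r '.APIServer, .Kubeconfig'
    # for this run: "Stopped" and "Misconfigured"; the status command itself exits with code 6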
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-898615
helpers_test.go:243: (dbg) docker inspect ha-898615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	        "Created": "2025-10-09T19:27:46.185451139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195210,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:27:46.220087387Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hostname",
	        "HostsPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hosts",
	        "LogPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e-json.log",
	        "Name": "/ha-898615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-898615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-898615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	                "LowerDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-898615",
	                "Source": "/var/lib/docker/volumes/ha-898615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-898615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-898615",
	                "name.minikube.sigs.k8s.io": "ha-898615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39eaa0a5ee22c7c1e0e0329f3f944afa2ddef6cd571ebd9f0aa050805b81a54f",
	            "SandboxKey": "/var/run/docker/netns/39eaa0a5ee22",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-898615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:31:dc:da:08:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a28b71fbb3464ef90fae817eea2f214ee0a3d1fe379af7ad6c6cc41b0261919e",
	                    "EndpointID": "7930ac5db0626d63671f2a77753202141442de70603e1b4acf6213e0bd34944d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-898615",
	                        "24eb0e5b6c52"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
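The NetworkSettings.Ports block above is also where the status helper finds the node's SSH endpoint; the same value can be pulled directly with the inspect template shown at the cli_runner line in the stderr above:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-898615
    # prints 32783 for this run, matching the "new ssh client" port in the status log above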
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615: exit status 6 (304.970003ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:37:40.505193  203740 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-158523 image ls --format json --alsologtostderr                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image build -t localhost/my-image:functional-158523 testdata/build --alsologtostderr          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image ls --format table --alsologtostderr                                                     │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image ls                                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ delete  │ -p functional-158523                                                                                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │ 09 Oct 25 19:27 UTC │
	│ start   │ ha-898615 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- rollout status deployment/busybox                                                          │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node add --alsologtostderr -v 5                                                                       │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:27:40
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:27:40.840216  194626 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:27:40.840464  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840473  194626 out.go:374] Setting ErrFile to fd 2...
	I1009 19:27:40.840478  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840669  194626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:27:40.841171  194626 out.go:368] Setting JSON to false
	I1009 19:27:40.842061  194626 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4210,"bootTime":1760033851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:27:40.842167  194626 start.go:143] virtualization: kvm guest
	I1009 19:27:40.844410  194626 out.go:179] * [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:27:40.845759  194626 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:27:40.845801  194626 notify.go:221] Checking for updates...
	I1009 19:27:40.848757  194626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:27:40.850175  194626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:27:40.851491  194626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:27:40.852705  194626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:27:40.853892  194626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:27:40.855321  194626 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:27:40.878925  194626 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:27:40.879089  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:40.938194  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.927865936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:40.938299  194626 docker.go:319] overlay module found
	I1009 19:27:40.940236  194626 out.go:179] * Using the docker driver based on user configuration
	I1009 19:27:40.941822  194626 start.go:309] selected driver: docker
	I1009 19:27:40.941843  194626 start.go:930] validating driver "docker" against <nil>
	I1009 19:27:40.941856  194626 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:27:40.942433  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:41.005454  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.995221815 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:41.005685  194626 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 19:27:41.005966  194626 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:27:41.007942  194626 out.go:179] * Using Docker driver with root privileges
	I1009 19:27:41.009445  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:41.009493  194626 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 19:27:41.009501  194626 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:27:41.009585  194626 start.go:353] cluster config:
	{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1009 19:27:41.010949  194626 out.go:179] * Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	I1009 19:27:41.012190  194626 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:27:41.013322  194626 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:27:41.014388  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.014435  194626 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:27:41.014445  194626 cache.go:58] Caching tarball of preloaded images
	I1009 19:27:41.014436  194626 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:27:41.014539  194626 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:27:41.014554  194626 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:27:41.014900  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:41.014929  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json: {Name:mk248a27bef73ecdf1bd71857a83cb8ef52e81ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:41.034874  194626 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:27:41.034900  194626 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:27:41.034920  194626 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:27:41.034961  194626 start.go:361] acquireMachinesLock for ha-898615: {Name:mk23a3ebf19c307a491c00f9452d757837bd240e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:27:41.035085  194626 start.go:365] duration metric: took 102.16µs to acquireMachinesLock for "ha-898615"
	I1009 19:27:41.035119  194626 start.go:94] Provisioning new machine with config: &{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:27:41.035201  194626 start.go:126] createHost starting for "" (driver="docker")
	I1009 19:27:41.037427  194626 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 19:27:41.037659  194626 start.go:160] libmachine.API.Create for "ha-898615" (driver="docker")
	I1009 19:27:41.037695  194626 client.go:168] LocalClient.Create starting
	I1009 19:27:41.037764  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem
	I1009 19:27:41.037804  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037821  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.037887  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem
	I1009 19:27:41.037913  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037927  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.038348  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:27:41.055472  194626 cli_runner.go:211] docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:27:41.055548  194626 network_create.go:284] running [docker network inspect ha-898615] to gather additional debugging logs...
	I1009 19:27:41.055569  194626 cli_runner.go:164] Run: docker network inspect ha-898615
	W1009 19:27:41.074024  194626 cli_runner.go:211] docker network inspect ha-898615 returned with exit code 1
	I1009 19:27:41.074056  194626 network_create.go:287] error running [docker network inspect ha-898615]: docker network inspect ha-898615: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-898615 not found
	I1009 19:27:41.074071  194626 network_create.go:289] output of [docker network inspect ha-898615]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-898615 not found
	
	** /stderr **
	I1009 19:27:41.074158  194626 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:41.091849  194626 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a266e0}
	I1009 19:27:41.091899  194626 network_create.go:124] attempt to create docker network ha-898615 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 19:27:41.091955  194626 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-898615 ha-898615
	I1009 19:27:41.153434  194626 network_create.go:108] docker network ha-898615 192.168.49.0/24 created
	I1009 19:27:41.153469  194626 kic.go:121] calculated static IP "192.168.49.2" for the "ha-898615" container
	I1009 19:27:41.153533  194626 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:27:41.170560  194626 cli_runner.go:164] Run: docker volume create ha-898615 --label name.minikube.sigs.k8s.io=ha-898615 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:27:41.190645  194626 oci.go:103] Successfully created a docker volume ha-898615
	I1009 19:27:41.190737  194626 cli_runner.go:164] Run: docker run --rm --name ha-898615-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --entrypoint /usr/bin/test -v ha-898615:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:27:41.600938  194626 oci.go:107] Successfully prepared a docker volume ha-898615
	I1009 19:27:41.600967  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.600993  194626 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:27:41.601055  194626 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 19:27:46.106521  194626 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.505414435s)
	I1009 19:27:46.106562  194626 kic.go:203] duration metric: took 4.50556336s to extract preloaded images to volume ...
	W1009 19:27:46.106658  194626 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 19:27:46.106697  194626 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 19:27:46.106744  194626 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:27:46.168307  194626 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-898615 --name ha-898615 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-898615 --network ha-898615 --ip 192.168.49.2 --volume ha-898615:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:27:46.438758  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Running}}
	I1009 19:27:46.461401  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.481444  194626 cli_runner.go:164] Run: docker exec ha-898615 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:27:46.527865  194626 oci.go:144] the created container "ha-898615" has a running status.
	I1009 19:27:46.527897  194626 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa...
	I1009 19:27:46.644201  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 19:27:46.644249  194626 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:27:46.674019  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.696195  194626 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:27:46.696225  194626 kic_runner.go:114] Args: [docker exec --privileged ha-898615 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:27:46.751886  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.774821  194626 machine.go:93] provisionDockerMachine start ...
	I1009 19:27:46.774929  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.795147  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.795497  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.795517  194626 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:27:46.948463  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:46.948514  194626 ubuntu.go:182] provisioning hostname "ha-898615"
	I1009 19:27:46.948583  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.967551  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.967801  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.967821  194626 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-898615 && echo "ha-898615" | sudo tee /etc/hostname
	I1009 19:27:47.127119  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:47.127192  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.145629  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.145957  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.145990  194626 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-898615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-898615/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-898615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:27:47.294310  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:27:47.294342  194626 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:27:47.294368  194626 ubuntu.go:190] setting up certificates
	I1009 19:27:47.294401  194626 provision.go:84] configureAuth start
	I1009 19:27:47.294454  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:47.312608  194626 provision.go:143] copyHostCerts
	I1009 19:27:47.312651  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312684  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:27:47.312731  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312817  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:27:47.312923  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.312952  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:27:47.312962  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.313014  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:27:47.313086  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313114  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:27:47.313124  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313163  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:27:47.313236  194626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.ha-898615 san=[127.0.0.1 192.168.49.2 ha-898615 localhost minikube]
	I1009 19:27:47.539620  194626 provision.go:177] copyRemoteCerts
	I1009 19:27:47.539697  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:27:47.539740  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.557819  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:47.662665  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:27:47.662781  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:27:47.683372  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:27:47.683457  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:27:47.701669  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:27:47.701736  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:27:47.720164  194626 provision.go:87] duration metric: took 425.744193ms to configureAuth
	I1009 19:27:47.720200  194626 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:27:47.720451  194626 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:47.720564  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.738437  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.738688  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.738714  194626 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:27:48.000107  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:27:48.000134  194626 machine.go:96] duration metric: took 1.225290325s to provisionDockerMachine
	I1009 19:27:48.000148  194626 client.go:171] duration metric: took 6.962444548s to LocalClient.Create
	I1009 19:27:48.000174  194626 start.go:168] duration metric: took 6.962517267s to libmachine.API.Create "ha-898615"
	I1009 19:27:48.000183  194626 start.go:294] postStartSetup for "ha-898615" (driver="docker")
	I1009 19:27:48.000199  194626 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:27:48.000266  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:27:48.000306  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.017815  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.123997  194626 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:27:48.127754  194626 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:27:48.127793  194626 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:27:48.127806  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:27:48.127864  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:27:48.127968  194626 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:27:48.127983  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:27:48.128079  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:27:48.136280  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:48.157989  194626 start.go:297] duration metric: took 157.790225ms for postStartSetup
	I1009 19:27:48.158323  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.176302  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:48.176728  194626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:27:48.176785  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.195218  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.296952  194626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:27:48.301724  194626 start.go:129] duration metric: took 7.26650252s to createHost
	I1009 19:27:48.301754  194626 start.go:84] releasing machines lock for "ha-898615", held for 7.266653034s
	I1009 19:27:48.301837  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.319813  194626 ssh_runner.go:195] Run: cat /version.json
	I1009 19:27:48.319860  194626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:27:48.319866  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.319919  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.339000  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.339730  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.494727  194626 ssh_runner.go:195] Run: systemctl --version
	I1009 19:27:48.501535  194626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:27:48.537367  194626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:27:48.542466  194626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:27:48.542536  194626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:27:48.568823  194626 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:27:48.568848  194626 start.go:496] detecting cgroup driver to use...
	I1009 19:27:48.568888  194626 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:27:48.568936  194626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:27:48.585765  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:27:48.598310  194626 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:27:48.598367  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:27:48.615925  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:27:48.634773  194626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:27:48.716369  194626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:27:48.805777  194626 docker.go:234] disabling docker service ...
	I1009 19:27:48.805849  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:27:48.826709  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:27:48.840263  194626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:27:48.923854  194626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:27:49.007492  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:27:49.020704  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:27:49.035117  194626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:27:49.035178  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.045691  194626 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:27:49.045759  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.055054  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.064210  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.073237  194626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:27:49.081510  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.090399  194626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.104388  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.113513  194626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:27:49.121175  194626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:27:49.128977  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.201984  194626 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:27:49.310640  194626 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:27:49.310716  194626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:27:49.315028  194626 start.go:564] Will wait 60s for crictl version
	I1009 19:27:49.315098  194626 ssh_runner.go:195] Run: which crictl
	I1009 19:27:49.319134  194626 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:27:49.344131  194626 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:27:49.344210  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.374167  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.406880  194626 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:27:49.408191  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:49.425730  194626 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:27:49.430174  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:49.441269  194626 kubeadm.go:883] updating cluster {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:27:49.441374  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:49.441444  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.474537  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.474563  194626 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:27:49.474626  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.501408  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.501432  194626 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:27:49.501440  194626 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:27:49.501565  194626 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-898615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:27:49.501644  194626 ssh_runner.go:195] Run: crio config
	I1009 19:27:49.548225  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:49.548247  194626 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:27:49.548270  194626 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:27:49.548295  194626 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-898615 NodeName:ha-898615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:27:49.548487  194626 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-898615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:27:49.548516  194626 kube-vip.go:115] generating kube-vip config ...
	I1009 19:27:49.548561  194626 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:27:49.560569  194626 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:27:49.560666  194626 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1009 19:27:49.560713  194626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:27:49.568926  194626 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:27:49.569017  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:27:49.577516  194626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:27:49.590951  194626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:27:49.606324  194626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 19:27:49.618986  194626 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 19:27:49.633526  194626 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:27:49.637412  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:49.647415  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.727039  194626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:49.749827  194626 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615 for IP: 192.168.49.2
	I1009 19:27:49.749848  194626 certs.go:195] generating shared ca certs ...
	I1009 19:27:49.749916  194626 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.750063  194626 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:27:49.750105  194626 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:27:49.750114  194626 certs.go:257] generating profile certs ...
	I1009 19:27:49.750166  194626 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key
	I1009 19:27:49.750191  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt with IP's: []
	I1009 19:27:49.922286  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt ...
	I1009 19:27:49.922322  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt: {Name:mke5f0b5d846145d5885091c3fdef11a03b4705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923243  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key ...
	I1009 19:27:49.923266  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key: {Name:mkb5fe559278888df39f6f81eb44f11c5b40eebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923364  194626 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38
	I1009 19:27:49.923404  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 19:27:50.174751  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 ...
	I1009 19:27:50.174785  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38: {Name:mk7158926fc69073b0db0c318bd9373ec8743788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.174962  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 ...
	I1009 19:27:50.174976  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38: {Name:mk3dc2ee28c7f607abddd15bf98fe46423492612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.175056  194626 certs.go:382] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt
	I1009 19:27:50.175135  194626 certs.go:386] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key
	I1009 19:27:50.175189  194626 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key
	I1009 19:27:50.175204  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt with IP's: []
	I1009 19:27:50.225715  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt ...
	I1009 19:27:50.225750  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt: {Name:mk61103353c5c3a0cbf54b18a910b0c83f19ee7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.225929  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key ...
	I1009 19:27:50.225941  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key: {Name:mkbe3072f67f16772d02b0025253c832dee4c437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.226013  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:27:50.226031  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:27:50.226044  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:27:50.226054  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:27:50.226067  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:27:50.226077  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:27:50.226088  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:27:50.226099  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:27:50.226151  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:27:50.226183  194626 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:27:50.226195  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:27:50.226219  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:27:50.226240  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:27:50.226260  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:27:50.226298  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:50.226322  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.226336  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.226348  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.226853  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:27:50.245895  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:27:50.263570  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:27:50.281671  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:27:50.300223  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:27:50.317734  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:27:50.335066  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:27:50.352704  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:27:50.370437  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:27:50.389421  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:27:50.406983  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:27:50.425614  194626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:27:50.439040  194626 ssh_runner.go:195] Run: openssl version
	I1009 19:27:50.445304  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:27:50.454013  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457873  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457929  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.491581  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:27:50.500958  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:27:50.509597  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513479  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513541  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.548203  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:27:50.557261  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:27:50.565908  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570090  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570139  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.604463  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:27:50.613756  194626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:27:50.617456  194626 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:27:50.617519  194626 kubeadm.go:400] StartCluster: {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:50.617611  194626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:27:50.617699  194626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:27:50.651007  194626 cri.go:89] found id: ""
	I1009 19:27:50.651094  194626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:27:50.660625  194626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:27:50.669536  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:27:50.669585  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:27:50.678565  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:27:50.678583  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:27:50.678632  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:27:50.686673  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:27:50.686739  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:27:50.694481  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:27:50.702511  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:27:50.702571  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:27:50.711335  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.719367  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:27:50.719437  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.727079  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:27:50.735773  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:27:50.735828  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:27:50.743402  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:27:50.783476  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:27:50.783553  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:27:50.804988  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:27:50.805072  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:27:50.805120  194626 kubeadm.go:318] OS: Linux
	I1009 19:27:50.805170  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:27:50.805244  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:27:50.805294  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:27:50.805368  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:27:50.805464  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:27:50.805570  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:27:50.805644  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:27:50.805709  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:27:50.865026  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:27:50.865138  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:27:50.865228  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:27:50.872900  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:27:50.874725  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:27:50.874828  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:27:50.874946  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:27:51.069934  194626 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:27:51.167065  194626 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:27:51.280550  194626 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:27:51.481119  194626 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:27:51.788062  194626 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:27:51.788230  194626 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:51.872181  194626 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:27:51.872362  194626 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:52.084923  194626 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:27:52.283000  194626 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:27:52.633617  194626 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:27:52.633683  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:27:52.795142  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:27:52.946617  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:27:53.060032  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:27:53.125689  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:27:53.322626  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:27:53.323225  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:27:53.325508  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:27:53.328645  194626 out.go:252]   - Booting up control plane ...
	I1009 19:27:53.328749  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:27:53.328835  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:27:53.328914  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:27:53.343297  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:27:53.343474  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:27:53.350427  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:27:53.350584  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:27:53.350658  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:27:53.451355  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:27:53.451572  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:27:54.452237  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000953428s
	I1009 19:27:54.456489  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:27:54.456634  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:27:54.456766  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:27:54.456840  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:31:54.457899  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	I1009 19:31:54.458060  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	I1009 19:31:54.458160  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	I1009 19:31:54.458176  194626 kubeadm.go:318] 
	I1009 19:31:54.458302  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:31:54.458440  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:31:54.458603  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:31:54.458813  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:31:54.458922  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:31:54.459044  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:31:54.459063  194626 kubeadm.go:318] 
	I1009 19:31:54.462630  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:54.462803  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:31:54.463587  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1009 19:31:54.463708  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1009 19:31:54.463871  194626 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000953428s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 19:31:54.463985  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 19:31:57.247677  194626 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.783657743s)
	I1009 19:31:57.247781  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:31:57.261935  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:31:57.261998  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:31:57.270455  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:31:57.270478  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:31:57.270524  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:31:57.278767  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:31:57.278835  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:31:57.286752  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:31:57.295053  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:31:57.295113  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:31:57.303048  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.311557  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:31:57.311631  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.319853  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:31:57.328199  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:31:57.328265  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:31:57.336006  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:31:57.396540  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:57.457937  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:36:00.178531  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 19:36:00.178718  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 19:36:00.182333  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:36:00.182408  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:36:00.182502  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:36:00.182552  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:36:00.182582  194626 kubeadm.go:318] OS: Linux
	I1009 19:36:00.182645  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:36:00.182724  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:36:00.182769  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:36:00.182812  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:36:00.182853  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:36:00.182902  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:36:00.182967  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:36:00.183022  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:36:00.183086  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:36:00.183241  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:36:00.183374  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:36:00.183474  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:36:00.185572  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:36:00.185640  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:36:00.185695  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:36:00.185782  194626 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:36:00.185857  194626 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:36:00.185922  194626 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:36:00.185977  194626 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:36:00.186037  194626 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:36:00.186100  194626 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:36:00.186172  194626 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:36:00.186259  194626 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:36:00.186320  194626 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:36:00.186413  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:36:00.186457  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:36:00.186503  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:36:00.186567  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:36:00.186661  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:36:00.186746  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:36:00.186865  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:36:00.186965  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:36:00.188328  194626 out.go:252]   - Booting up control plane ...
	I1009 19:36:00.188439  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:36:00.188511  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:36:00.188565  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:36:00.188647  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:36:00.188734  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:36:00.188835  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:36:00.188946  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:36:00.188994  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:36:00.189107  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:36:00.189207  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:36:00.189257  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.196157ms
	I1009 19:36:00.189351  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:36:00.189534  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:36:00.189618  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:36:00.189686  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:36:00.189753  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	I1009 19:36:00.189820  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	I1009 19:36:00.189897  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	I1009 19:36:00.189910  194626 kubeadm.go:318] 
	I1009 19:36:00.190035  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:36:00.190115  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:36:00.190192  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:36:00.190268  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:36:00.190333  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:36:00.190440  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:36:00.190476  194626 kubeadm.go:318] 
	I1009 19:36:00.190536  194626 kubeadm.go:402] duration metric: took 8m9.573023353s to StartCluster
	I1009 19:36:00.190620  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:36:00.190688  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:36:00.221163  194626 cri.go:89] found id: ""
	I1009 19:36:00.221200  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.221211  194626 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:36:00.221218  194626 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:36:00.221282  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:36:00.248439  194626 cri.go:89] found id: ""
	I1009 19:36:00.248473  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.248486  194626 logs.go:284] No container was found matching "etcd"
	I1009 19:36:00.248498  194626 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:36:00.248564  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:36:00.275751  194626 cri.go:89] found id: ""
	I1009 19:36:00.275781  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.275792  194626 logs.go:284] No container was found matching "coredns"
	I1009 19:36:00.275801  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:36:00.275868  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:36:00.304183  194626 cri.go:89] found id: ""
	I1009 19:36:00.304218  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.304227  194626 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:36:00.304233  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:36:00.304286  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:36:00.333120  194626 cri.go:89] found id: ""
	I1009 19:36:00.333154  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.333164  194626 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:36:00.333171  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:36:00.333221  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:36:00.362504  194626 cri.go:89] found id: ""
	I1009 19:36:00.362527  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.362536  194626 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:36:00.362542  194626 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:36:00.362602  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:36:00.391912  194626 cri.go:89] found id: ""
	I1009 19:36:00.391939  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.391949  194626 logs.go:284] No container was found matching "kindnet"
	I1009 19:36:00.391964  194626 logs.go:123] Gathering logs for kubelet ...
	I1009 19:36:00.391982  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:36:00.460680  194626 logs.go:123] Gathering logs for dmesg ...
	I1009 19:36:00.460722  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:36:00.473600  194626 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:36:00.473634  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:36:00.538515  194626 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:36:00.538542  194626 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:36:00.538556  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:36:00.601750  194626 logs.go:123] Gathering logs for container status ...
	I1009 19:36:00.601793  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 19:36:00.631755  194626 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 19:36:00.631818  194626 out.go:285] * 
	W1009 19:36:00.631894  194626 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.631914  194626 out.go:285] * 
	W1009 19:36:00.633671  194626 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:36:00.637455  194626 out.go:203] 
	W1009 19:36:00.638829  194626 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.638866  194626 out.go:285] * 
	I1009 19:36:00.640615  194626 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.003686801Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.008487936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.009158437Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.031053012Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.032838277Z" level=info msg="createCtr: deleting container ID 70ffc7f39282e890f5d610a6d55742a39f611c4b72b1abc5e57fda6647072a96 from idIndex" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.032876297Z" level=info msg="createCtr: removing container 70ffc7f39282e890f5d610a6d55742a39f611c4b72b1abc5e57fda6647072a96" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.032910812Z" level=info msg="createCtr: deleting container 70ffc7f39282e890f5d610a6d55742a39f611c4b72b1abc5e57fda6647072a96 from storage" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.03537319Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-898615_kube-system_2d43b86ac03e93e11f35c923e2560103_0" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.001285127Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=c02e9596-6e86-4723-9fa0-0c1a59b8ec11 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.002863313Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=7ee67851-e858-47a9-b6e5-3f32f6c3c4d6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.003889028Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-898615/kube-scheduler" id=c755ab01-2e13-41e3-bf5d-99b5861fe9f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.004176819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.00769631Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.008275988Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.030838786Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=c755ab01-2e13-41e3-bf5d-99b5861fe9f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.032303179Z" level=info msg="createCtr: deleting container ID de42baf3157891d1e94f492d869745ae3dbace262ec6d2110858aae19b9d4966 from idIndex" id=c755ab01-2e13-41e3-bf5d-99b5861fe9f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.032342772Z" level=info msg="createCtr: removing container de42baf3157891d1e94f492d869745ae3dbace262ec6d2110858aae19b9d4966" id=c755ab01-2e13-41e3-bf5d-99b5861fe9f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.032389733Z" level=info msg="createCtr: deleting container de42baf3157891d1e94f492d869745ae3dbace262ec6d2110858aae19b9d4966 from storage" id=c755ab01-2e13-41e3-bf5d-99b5861fe9f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.034633841Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-898615_kube-system_bc0e16a1814c5389485436acdfc968ed_0" id=c755ab01-2e13-41e3-bf5d-99b5861fe9f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.000987566Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=5e0f4023-c048-4d45-8521-b7546432f5a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.002003447Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=ebe042e5-2717-438c-a7dc-ab7e11cf19ab name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.003011113Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-898615/kube-controller-manager" id=6811f773-690c-437e-9267-8e4e5d88e7a9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.0032297Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.006977689Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.007515147Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:37:41.115699    3860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:41.116249    3860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:41.117906    3860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:41.118422    3860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:41.119636    3860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:37:41 up  1:20,  0 user,  load average: 0.02, 0.04, 1.71
	Linux ha-898615 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:37:37 ha-898615 kubelet[1937]: E1009 19:37:37.035778    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:37:37 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:37 ha-898615 kubelet[1937]:  > podSandboxID="043d27dc8aae857d8a42667dfed1b409b78957e0ac42335f6d23fbc5540aedfd"
	Oct 09 19:37:37 ha-898615 kubelet[1937]: E1009 19:37:37.035886    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:37:37 ha-898615 kubelet[1937]:         container etcd start failed in pod etcd-ha-898615_kube-system(2d43b86ac03e93e11f35c923e2560103): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:37 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:37 ha-898615 kubelet[1937]: E1009 19:37:37.035919    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-898615" podUID="2d43b86ac03e93e11f35c923e2560103"
	Oct 09 19:37:39 ha-898615 kubelet[1937]: E1009 19:37:39.717957    1937 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-898615.186ce986e63acb66  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-898615,UID:ha-898615,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-898615 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-898615,},FirstTimestamp:2025-10-09 19:31:59.992523622 +0000 UTC m=+0.320486905,LastTimestamp:2025-10-09 19:31:59.992523622 +0000 UTC m=+0.320486905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-898615,}"
	Oct 09 19:37:40 ha-898615 kubelet[1937]: E1009 19:37:40.000749    1937 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:37:40 ha-898615 kubelet[1937]: E1009 19:37:40.021145    1937 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-898615\" not found"
	Oct 09 19:37:40 ha-898615 kubelet[1937]: E1009 19:37:40.034968    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:37:40 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:40 ha-898615 kubelet[1937]:  > podSandboxID="d9cf0054a77eb17087a85fb70ade0aa16f7510c69fabd94329449a3f5ee8df1b"
	Oct 09 19:37:40 ha-898615 kubelet[1937]: E1009 19:37:40.035092    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:37:40 ha-898615 kubelet[1937]:         container kube-scheduler start failed in pod kube-scheduler-ha-898615_kube-system(bc0e16a1814c5389485436acdfc968ed): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:40 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:40 ha-898615 kubelet[1937]: E1009 19:37:40.035140    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-898615" podUID="bc0e16a1814c5389485436acdfc968ed"
	Oct 09 19:37:41 ha-898615 kubelet[1937]: E1009 19:37:41.000504    1937 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:37:41 ha-898615 kubelet[1937]: E1009 19:37:41.027354    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:37:41 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:41 ha-898615 kubelet[1937]:  > podSandboxID="da9ccac0760165455089feb0a833fa92808edbd25f27148d0daf896a8d20b03b"
	Oct 09 19:37:41 ha-898615 kubelet[1937]: E1009 19:37:41.027526    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:37:41 ha-898615 kubelet[1937]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-898615_kube-system(606a9e8949f01295d66148e5eac379ce): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:41 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:41 ha-898615 kubelet[1937]: E1009 19:37:41.027599    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-898615" podUID="606a9e8949f01295d66148e5eac379ce"
	

                                                
                                                
-- /stdout --
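The CRI-O and kubelet sections above show every control-plane container (etcd, kube-scheduler, kube-controller-manager) failing at creation with "cannot open sd-bus: No such file or directory", which is why the apiserver never comes up. As a minimal triage sketch that simply follows the hint kubeadm prints in the same log, the failed containers and their logs could be listed on the node (reaching the node via `minikube -p ha-898615 ssh` is an assumption; the crictl socket path and the CONTAINERID placeholder are copied from the kubeadm output above):

	# list all Kubernetes containers CRI-O knows about, including ones that never started
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# read the logs of one failing container (CONTAINERID is a placeholder)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# CRI-O runs as a systemd unit on the node, so its own errors also land in the journal
	sudo journalctl -u crio --no-pager | tail -n 50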
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615: exit status 6 (303.43046ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:37:41.496594  204068 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-898615" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (1.62s)
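For reference, the wait-control-plane failure earlier in this log names the exact health endpoints kubeadm was polling: kube-apiserver /livez on 192.168.49.2:8443 plus the controller-manager and scheduler endpoints on localhost. A minimal sketch, assuming a shell on the node, of probing the same URLs by hand (URLs copied from the control-plane-check lines above; -k skips certificate verification, and "connection refused" here would match the reported failures):

	curl -k --max-time 5 https://192.168.49.2:8443/livez
	curl -k --max-time 5 https://127.0.0.1:10257/healthz
	curl -k --max-time 5 https://127.0.0.1:10259/livez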

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (1.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 node stop m02 --alsologtostderr -v 5: exit status 85 (92.452782ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:37:41.558801  204183 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:37:41.559117  204183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:41.559129  204183 out.go:374] Setting ErrFile to fd 2...
	I1009 19:37:41.559135  204183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:41.559356  204183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:37:41.559677  204183 mustload.go:65] Loading cluster: ha-898615
	I1009 19:37:41.560079  204183 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:37:41.562020  204183 out.go:203] 
	W1009 19:37:41.563335  204183 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1009 19:37:41.563351  204183 out.go:285] * 
	* 
	W1009 19:37:41.598284  204183 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:37:41.599869  204183 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-898615 node stop m02 --alsologtostderr -v 5": exit status 85
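The stop fails with GUEST_NODE_RETRIEVE because the profile has no m02 node: the multi-node start of ha-898615 never got past the primary control plane. A minimal sketch, using the same binary path as the rest of this report, of confirming which nodes the profile actually has before operating on m02:

	# list the nodes recorded for this profile; only the primary control plane is expected here
	out/minikube-linux-amd64 -p ha-898615 node list
	# the profile listing shows the same cluster with its node count and status
	out/minikube-linux-amd64 profile list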
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5: exit status 6 (299.777974ms)

                                                
                                                
-- stdout --
	ha-898615
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:37:41.651422  204194 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:37:41.651677  204194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:41.651686  204194 out.go:374] Setting ErrFile to fd 2...
	I1009 19:37:41.651691  204194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:41.651903  204194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:37:41.652075  204194 out.go:368] Setting JSON to false
	I1009 19:37:41.652100  204194 mustload.go:65] Loading cluster: ha-898615
	I1009 19:37:41.652237  204194 notify.go:221] Checking for updates...
	I1009 19:37:41.652534  204194 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:37:41.652556  204194 status.go:174] checking status of ha-898615 ...
	I1009 19:37:41.653096  204194 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:37:41.672295  204194 status.go:371] ha-898615 host status = "Running" (err=<nil>)
	I1009 19:37:41.672324  204194 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:37:41.672609  204194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:37:41.690623  204194 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:37:41.691089  204194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:37:41.691160  204194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:37:41.710442  204194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:37:41.811838  204194 ssh_runner.go:195] Run: systemctl --version
	I1009 19:37:41.818896  204194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:37:41.831812  204194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:37:41.887812  204194 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:37:41.877661191 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 19:37:41.888289  204194 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:37:41.888326  204194 api_server.go:166] Checking apiserver status ...
	I1009 19:37:41.888372  204194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 19:37:41.899292  204194 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:37:41.899321  204194 status.go:463] ha-898615 apiserver status = Running (err=<nil>)
	I1009 19:37:41.899338  204194 status.go:176] ha-898615 status: &{Name:ha-898615 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:374: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5" : exit status 6
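The status output warns that the kubeconfig has no endpoint for "ha-898615" and points at `minikube update-context`. A minimal sketch of following that hint (the -p flag matches the profile used throughout this report):

	# rewrite the kubeconfig entry for this profile, as the warning above suggests
	out/minikube-linux-amd64 -p ha-898615 update-context
	# then re-run the status check
	out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5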
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-898615
helpers_test.go:243: (dbg) docker inspect ha-898615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	        "Created": "2025-10-09T19:27:46.185451139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195210,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:27:46.220087387Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hostname",
	        "HostsPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hosts",
	        "LogPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e-json.log",
	        "Name": "/ha-898615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-898615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-898615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	                "LowerDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-898615",
	                "Source": "/var/lib/docker/volumes/ha-898615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-898615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-898615",
	                "name.minikube.sigs.k8s.io": "ha-898615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39eaa0a5ee22c7c1e0e0329f3f944afa2ddef6cd571ebd9f0aa050805b81a54f",
	            "SandboxKey": "/var/run/docker/netns/39eaa0a5ee22",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-898615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:31:dc:da:08:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a28b71fbb3464ef90fae817eea2f214ee0a3d1fe379af7ad6c6cc41b0261919e",
	                    "EndpointID": "7930ac5db0626d63671f2a77753202141442de70603e1b4acf6213e0bd34944d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-898615",
	                        "24eb0e5b6c52"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
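The inspect output above shows the profile's container publishing its ports on 127.0.0.1 only (SSH on 32783, the Kubernetes apiserver port 8443 on 32786). The status code in this log extracts the SSH port with a Go template; as a minimal sketch, the same template can be run by hand to recover a single mapping (template copied from the cli_runner line earlier in this log):

	# print the host port mapped to the container's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-898615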
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615: exit status 6 (299.252537ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:37:42.206954  204318 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-158523 image build -t localhost/my-image:functional-158523 testdata/build --alsologtostderr          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image ls --format table --alsologtostderr                                                     │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image ls                                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ delete  │ -p functional-158523                                                                                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │ 09 Oct 25 19:27 UTC │
	│ start   │ ha-898615 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- rollout status deployment/busybox                                                          │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node add --alsologtostderr -v 5                                                                       │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node stop m02 --alsologtostderr -v 5                                                                  │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:27:40
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:27:40.840216  194626 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:27:40.840464  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840473  194626 out.go:374] Setting ErrFile to fd 2...
	I1009 19:27:40.840478  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840669  194626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:27:40.841171  194626 out.go:368] Setting JSON to false
	I1009 19:27:40.842061  194626 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4210,"bootTime":1760033851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:27:40.842167  194626 start.go:143] virtualization: kvm guest
	I1009 19:27:40.844410  194626 out.go:179] * [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:27:40.845759  194626 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:27:40.845801  194626 notify.go:221] Checking for updates...
	I1009 19:27:40.848757  194626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:27:40.850175  194626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:27:40.851491  194626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:27:40.852705  194626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:27:40.853892  194626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:27:40.855321  194626 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:27:40.878925  194626 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:27:40.879089  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:40.938194  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.927865936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:40.938299  194626 docker.go:319] overlay module found
	I1009 19:27:40.940236  194626 out.go:179] * Using the docker driver based on user configuration
	I1009 19:27:40.941822  194626 start.go:309] selected driver: docker
	I1009 19:27:40.941843  194626 start.go:930] validating driver "docker" against <nil>
	I1009 19:27:40.941856  194626 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:27:40.942433  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:41.005454  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.995221815 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:41.005685  194626 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 19:27:41.005966  194626 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:27:41.007942  194626 out.go:179] * Using Docker driver with root privileges
	I1009 19:27:41.009445  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:41.009493  194626 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 19:27:41.009501  194626 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:27:41.009585  194626 start.go:353] cluster config:
	{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1009 19:27:41.010949  194626 out.go:179] * Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	I1009 19:27:41.012190  194626 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:27:41.013322  194626 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:27:41.014388  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.014435  194626 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:27:41.014445  194626 cache.go:58] Caching tarball of preloaded images
	I1009 19:27:41.014436  194626 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:27:41.014539  194626 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:27:41.014554  194626 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:27:41.014900  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:41.014929  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json: {Name:mk248a27bef73ecdf1bd71857a83cb8ef52e81ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:41.034874  194626 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:27:41.034900  194626 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:27:41.034920  194626 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:27:41.034961  194626 start.go:361] acquireMachinesLock for ha-898615: {Name:mk23a3ebf19c307a491c00f9452d757837bd240e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:27:41.035085  194626 start.go:365] duration metric: took 102.16µs to acquireMachinesLock for "ha-898615"
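The two log lines above acquire the per-profile machines lock (Delay:500ms Timeout:10m0s) before provisioning begins. A minimal sketch of that acquire-with-retry pattern, using a hypothetical lock-file path and helper rather than minikube's actual lock implementation:

// machlock.go - illustrative only: acquire an exclusive lock file, retrying every
// 500ms until a timeout, matching the Delay/Timeout values printed in the log.
package main

import (
	"fmt"
	"os"
	"time"
)

func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		// O_EXCL makes creation fail if another process already holds the lock file.
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer release()
	fmt.Println("lock held; host creation would run here")
}

Releasing the lock once the host is created corresponds to the later "releasing machines lock" line in this log.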
	I1009 19:27:41.035119  194626 start.go:94] Provisioning new machine with config: &{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:27:41.035201  194626 start.go:126] createHost starting for "" (driver="docker")
	I1009 19:27:41.037427  194626 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 19:27:41.037659  194626 start.go:160] libmachine.API.Create for "ha-898615" (driver="docker")
	I1009 19:27:41.037695  194626 client.go:168] LocalClient.Create starting
	I1009 19:27:41.037764  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem
	I1009 19:27:41.037804  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037821  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.037887  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem
	I1009 19:27:41.037913  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037927  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.038348  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:27:41.055472  194626 cli_runner.go:211] docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:27:41.055548  194626 network_create.go:284] running [docker network inspect ha-898615] to gather additional debugging logs...
	I1009 19:27:41.055569  194626 cli_runner.go:164] Run: docker network inspect ha-898615
	W1009 19:27:41.074024  194626 cli_runner.go:211] docker network inspect ha-898615 returned with exit code 1
	I1009 19:27:41.074056  194626 network_create.go:287] error running [docker network inspect ha-898615]: docker network inspect ha-898615: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-898615 not found
	I1009 19:27:41.074071  194626 network_create.go:289] output of [docker network inspect ha-898615]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-898615 not found
	
	** /stderr **
	I1009 19:27:41.074158  194626 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:41.091849  194626 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a266e0}
	I1009 19:27:41.091899  194626 network_create.go:124] attempt to create docker network ha-898615 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 19:27:41.091955  194626 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-898615 ha-898615
	I1009 19:27:41.153434  194626 network_create.go:108] docker network ha-898615 192.168.49.0/24 created
	I1009 19:27:41.153469  194626 kic.go:121] calculated static IP "192.168.49.2" for the "ha-898615" container
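The lines above show minikube probing for a free private subnet (it settles on 192.168.49.0/24), creating a dedicated bridge network with a fixed gateway, and then pinning the container's static IP to 192.168.49.2. A minimal sketch of one way to test whether a candidate /24 is already taken; this version only checks local interface addresses, whereas the real flow also inspects existing Docker networks (see the docker network inspect bridge call above):

// freesubnet.go - illustrative only: walk candidate /24 blocks and skip any that
// overlap an address already assigned to a local interface.
package main

import (
	"fmt"
	"net"
)

func subnetInUse(cidr string) bool {
	_, candidate, err := net.ParseCIDR(cidr)
	if err != nil {
		return true // treat unparsable candidates as unusable
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // be conservative: treat errors as "in use"
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
			return true
		}
	}
	return false
}

func main() {
	// Candidate blocks similar to the ones probed in the log (192.168.49.0/24 first).
	for _, c := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"} {
		if !subnetInUse(c) {
			fmt.Println("using free private subnet", c)
			return
		}
	}
	fmt.Println("no free subnet found")
}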
	I1009 19:27:41.153533  194626 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:27:41.170560  194626 cli_runner.go:164] Run: docker volume create ha-898615 --label name.minikube.sigs.k8s.io=ha-898615 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:27:41.190645  194626 oci.go:103] Successfully created a docker volume ha-898615
	I1009 19:27:41.190737  194626 cli_runner.go:164] Run: docker run --rm --name ha-898615-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --entrypoint /usr/bin/test -v ha-898615:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:27:41.600938  194626 oci.go:107] Successfully prepared a docker volume ha-898615
	I1009 19:27:41.600967  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.600993  194626 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:27:41.601055  194626 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 19:27:46.106521  194626 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.505414435s)
	I1009 19:27:46.106562  194626 kic.go:203] duration metric: took 4.50556336s to extract preloaded images to volume ...
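The docker run above populates the named volume ha-898615 by mounting the lz4 preload tarball read-only into a throwaway kicbase container and untarring it into /extractDir. The same invocation wrapped in Go, for illustration only (the image digest from the log is dropped here and error handling is trimmed):

// preload.go - illustrative only: populate a named Docker volume from a preload
// tarball by running a disposable container whose entrypoint is tar.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	tarball := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4")
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", "ha-898615:/extractDir", // named volume created earlier in the log
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
	fmt.Println("preloaded images extracted into volume ha-898615")
}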
	W1009 19:27:46.106658  194626 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 19:27:46.106697  194626 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 19:27:46.106744  194626 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:27:46.168307  194626 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-898615 --name ha-898615 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-898615 --network ha-898615 --ip 192.168.49.2 --volume ha-898615:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:27:46.438758  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Running}}
	I1009 19:27:46.461401  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.481444  194626 cli_runner.go:164] Run: docker exec ha-898615 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:27:46.527865  194626 oci.go:144] the created container "ha-898615" has a running status.
	I1009 19:27:46.527897  194626 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa...
	I1009 19:27:46.644201  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 19:27:46.644249  194626 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:27:46.674019  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.696195  194626 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:27:46.696225  194626 kic_runner.go:114] Args: [docker exec --privileged ha-898615 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:27:46.751886  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.774821  194626 machine.go:93] provisionDockerMachine start ...
	I1009 19:27:46.774929  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.795147  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.795497  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.795517  194626 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:27:46.948463  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:46.948514  194626 ubuntu.go:182] provisioning hostname "ha-898615"
	I1009 19:27:46.948583  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.967551  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.967801  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.967821  194626 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-898615 && echo "ha-898615" | sudo tee /etc/hostname
	I1009 19:27:47.127119  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:47.127192  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.145629  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.145957  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.145990  194626 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-898615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-898615/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-898615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:27:47.294310  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:27:47.294342  194626 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:27:47.294368  194626 ubuntu.go:190] setting up certificates
	I1009 19:27:47.294401  194626 provision.go:84] configureAuth start
	I1009 19:27:47.294454  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:47.312608  194626 provision.go:143] copyHostCerts
	I1009 19:27:47.312651  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312684  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:27:47.312731  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312817  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:27:47.312923  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.312952  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:27:47.312962  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.313014  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:27:47.313086  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313114  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:27:47.313124  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313163  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:27:47.313236  194626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.ha-898615 san=[127.0.0.1 192.168.49.2 ha-898615 localhost minikube]
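configureAuth generates a server certificate signed by the minikube CA whose SAN list covers the loopback address, the container IP, the hostname, localhost and minikube. A minimal sketch of issuing such a certificate with crypto/x509, using a freshly generated stand-in CA instead of the ca.pem/ca-key.pem files referenced above:

// servercert.go - illustrative only: issue a CA-signed server certificate with the
// SANs seen in the log (127.0.0.1, 192.168.49.2, ha-898615, localhost, minikube).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; minikube reuses the CA key pair under .minikube/certs instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN list from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-898615"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-898615", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	// Print the server certificate; the private key would be written alongside it.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}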
	I1009 19:27:47.539620  194626 provision.go:177] copyRemoteCerts
	I1009 19:27:47.539697  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:27:47.539740  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.557819  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:47.662665  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:27:47.662781  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:27:47.683372  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:27:47.683457  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:27:47.701669  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:27:47.701736  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:27:47.720164  194626 provision.go:87] duration metric: took 425.744193ms to configureAuth
	I1009 19:27:47.720200  194626 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:27:47.720451  194626 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:47.720564  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.738437  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.738688  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.738714  194626 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:27:48.000107  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:27:48.000134  194626 machine.go:96] duration metric: took 1.225290325s to provisionDockerMachine
	I1009 19:27:48.000148  194626 client.go:171] duration metric: took 6.962444548s to LocalClient.Create
	I1009 19:27:48.000174  194626 start.go:168] duration metric: took 6.962517267s to libmachine.API.Create "ha-898615"
	I1009 19:27:48.000183  194626 start.go:294] postStartSetup for "ha-898615" (driver="docker")
	I1009 19:27:48.000199  194626 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:27:48.000266  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:27:48.000306  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.017815  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.123997  194626 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:27:48.127754  194626 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:27:48.127793  194626 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:27:48.127806  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:27:48.127864  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:27:48.127968  194626 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:27:48.127983  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:27:48.128079  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:27:48.136280  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:48.157989  194626 start.go:297] duration metric: took 157.790225ms for postStartSetup
	I1009 19:27:48.158323  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.176302  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:48.176728  194626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:27:48.176785  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.195218  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.296952  194626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:27:48.301724  194626 start.go:129] duration metric: took 7.26650252s to createHost
	I1009 19:27:48.301754  194626 start.go:84] releasing machines lock for "ha-898615", held for 7.266653034s
	I1009 19:27:48.301837  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.319813  194626 ssh_runner.go:195] Run: cat /version.json
	I1009 19:27:48.319860  194626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:27:48.319866  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.319919  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.339000  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.339730  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.494727  194626 ssh_runner.go:195] Run: systemctl --version
	I1009 19:27:48.501535  194626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:27:48.537367  194626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:27:48.542466  194626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:27:48.542536  194626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:27:48.568823  194626 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:27:48.568848  194626 start.go:496] detecting cgroup driver to use...
	I1009 19:27:48.568888  194626 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:27:48.568936  194626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:27:48.585765  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:27:48.598310  194626 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:27:48.598367  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:27:48.615925  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:27:48.634773  194626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:27:48.716369  194626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:27:48.805777  194626 docker.go:234] disabling docker service ...
	I1009 19:27:48.805849  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:27:48.826709  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:27:48.840263  194626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:27:48.923854  194626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:27:49.007492  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:27:49.020704  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:27:49.035117  194626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:27:49.035178  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.045691  194626 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:27:49.045759  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.055054  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.064210  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.073237  194626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:27:49.081510  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.090399  194626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.104388  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.113513  194626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:27:49.121175  194626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:27:49.128977  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.201984  194626 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:27:49.310640  194626 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:27:49.310716  194626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:27:49.315028  194626 start.go:564] Will wait 60s for crictl version
	I1009 19:27:49.315098  194626 ssh_runner.go:195] Run: which crictl
	I1009 19:27:49.319134  194626 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:27:49.344131  194626 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
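After restarting CRI-O with the new pause image and systemd cgroup settings, the start code polls for /var/run/crio/crio.sock and then for a crictl version response, each with a 60s budget ("Will wait 60s for socket path", "Will wait 60s for crictl version"). A minimal sketch of that poll-until-deadline step:

// waitsock.go - illustrative only: poll a socket path until stat succeeds or the
// deadline passes, mirroring the 60s wait in the log.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}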
	I1009 19:27:49.344210  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.374167  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.406880  194626 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:27:49.408191  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:49.425730  194626 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:27:49.430174  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:49.441269  194626 kubeadm.go:883] updating cluster {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:27:49.441374  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:49.441444  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.474537  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.474563  194626 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:27:49.474626  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.501408  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.501432  194626 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:27:49.501440  194626 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:27:49.501565  194626 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-898615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:27:49.501644  194626 ssh_runner.go:195] Run: crio config
	I1009 19:27:49.548225  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:49.548247  194626 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:27:49.548270  194626 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:27:49.548295  194626 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-898615 NodeName:ha-898615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:27:49.548487  194626 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-898615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:27:49.548516  194626 kube-vip.go:115] generating kube-vip config ...
	I1009 19:27:49.548561  194626 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:27:49.560569  194626 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:27:49.560666  194626 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
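The kube-vip manifest above is written only after the lsmod probe for ip_vs fails, so IPVS-based control-plane load-balancing is skipped and the VIP 192.168.49.254 is simply ARP-advertised on eth0. A minimal sketch of such a module probe that reads /proc/modules instead of shelling out to lsmod (not minikube's implementation):

// modcheck.go - illustrative only: report whether a kernel module is loaded by
// scanning /proc/modules, the same information "lsmod | grep ip_vs" provides.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func moduleLoaded(name string) (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), name+" ") {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := moduleLoaded("ip_vs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if ok {
		fmt.Println("ip_vs available: control-plane load-balancing could be enabled")
	} else {
		fmt.Println("ip_vs not loaded: giving up load-balancing, ARP-advertised VIP only")
	}
}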
	I1009 19:27:49.560713  194626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:27:49.568926  194626 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:27:49.569017  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:27:49.577516  194626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:27:49.590951  194626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:27:49.606324  194626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 19:27:49.618986  194626 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 19:27:49.633526  194626 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:27:49.637412  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:49.647415  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.727039  194626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:49.749827  194626 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615 for IP: 192.168.49.2
	I1009 19:27:49.749848  194626 certs.go:195] generating shared ca certs ...
	I1009 19:27:49.749916  194626 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.750063  194626 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:27:49.750105  194626 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:27:49.750114  194626 certs.go:257] generating profile certs ...
	I1009 19:27:49.750166  194626 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key
	I1009 19:27:49.750191  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt with IP's: []
	I1009 19:27:49.922286  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt ...
	I1009 19:27:49.922322  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt: {Name:mke5f0b5d846145d5885091c3fdef11a03b4705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923243  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key ...
	I1009 19:27:49.923266  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key: {Name:mkb5fe559278888df39f6f81eb44f11c5b40eebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923364  194626 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38
	I1009 19:27:49.923404  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 19:27:50.174751  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 ...
	I1009 19:27:50.174785  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38: {Name:mk7158926fc69073b0db0c318bd9373ec8743788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.174962  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 ...
	I1009 19:27:50.174976  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38: {Name:mk3dc2ee28c7f607abddd15bf98fe46423492612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.175056  194626 certs.go:382] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt
	I1009 19:27:50.175135  194626 certs.go:386] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key
	I1009 19:27:50.175189  194626 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key
	I1009 19:27:50.175204  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt with IP's: []
	I1009 19:27:50.225715  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt ...
	I1009 19:27:50.225750  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt: {Name:mk61103353c5c3a0cbf54b18a910b0c83f19ee7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.225929  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key ...
	I1009 19:27:50.225941  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key: {Name:mkbe3072f67f16772d02b0025253c832dee4c437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.226013  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:27:50.226031  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:27:50.226044  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:27:50.226054  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:27:50.226067  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:27:50.226077  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:27:50.226088  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:27:50.226099  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:27:50.226151  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:27:50.226183  194626 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:27:50.226195  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:27:50.226219  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:27:50.226240  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:27:50.226260  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:27:50.226298  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:50.226322  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.226336  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.226348  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.226853  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:27:50.245895  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:27:50.263570  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:27:50.281671  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:27:50.300223  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:27:50.317734  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:27:50.335066  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:27:50.352704  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:27:50.370437  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:27:50.389421  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:27:50.406983  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:27:50.425614  194626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:27:50.439040  194626 ssh_runner.go:195] Run: openssl version
	I1009 19:27:50.445304  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:27:50.454013  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457873  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457929  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.491581  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:27:50.500958  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:27:50.509597  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513479  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513541  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.548203  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:27:50.557261  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:27:50.565908  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570090  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570139  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.604463  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
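Each of the three certificates installed under /usr/share/ca-certificates is made visible to OpenSSL-based clients by linking it into /etc/ssl/certs under its subject-hash name: openssl x509 -hash -noout prints the hash, and ln -fs creates the <hash>.0 symlink. A minimal sketch of the same two steps driven from Go (requires root for the symlink; the certificate path is the one from the log):

// certhash.go - illustrative only: compute a certificate's subject hash via openssl
// and create the corresponding /etc/ssl/certs/<hash>.0 symlink.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "openssl:", err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent to: ln -fs <cert> /etc/ssl/certs/<hash>.0
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, "symlink:", err)
		os.Exit(1)
	}
	fmt.Println("linked", cert, "->", link)
}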
	I1009 19:27:50.613756  194626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:27:50.617456  194626 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:27:50.617519  194626 kubeadm.go:400] StartCluster: {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:50.617611  194626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:27:50.617699  194626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:27:50.651007  194626 cri.go:89] found id: ""
	I1009 19:27:50.651094  194626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:27:50.660625  194626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:27:50.669536  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:27:50.669585  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:27:50.678565  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:27:50.678583  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:27:50.678632  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:27:50.686673  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:27:50.686739  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:27:50.694481  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:27:50.702511  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:27:50.702571  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:27:50.711335  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.719367  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:27:50.719437  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.727079  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:27:50.735773  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:27:50.735828  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:27:50.743402  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:27:50.783476  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:27:50.783553  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:27:50.804988  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:27:50.805072  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:27:50.805120  194626 kubeadm.go:318] OS: Linux
	I1009 19:27:50.805170  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:27:50.805244  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:27:50.805294  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:27:50.805368  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:27:50.805464  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:27:50.805570  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:27:50.805644  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:27:50.805709  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:27:50.865026  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:27:50.865138  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:27:50.865228  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:27:50.872900  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:27:50.874725  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:27:50.874828  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:27:50.874946  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:27:51.069934  194626 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:27:51.167065  194626 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:27:51.280550  194626 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:27:51.481119  194626 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:27:51.788062  194626 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:27:51.788230  194626 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:51.872181  194626 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:27:51.872362  194626 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:52.084923  194626 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:27:52.283000  194626 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:27:52.633617  194626 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:27:52.633683  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:27:52.795142  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:27:52.946617  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:27:53.060032  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:27:53.125689  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:27:53.322626  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:27:53.323225  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:27:53.325508  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:27:53.328645  194626 out.go:252]   - Booting up control plane ...
	I1009 19:27:53.328749  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:27:53.328835  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:27:53.328914  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:27:53.343297  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:27:53.343474  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:27:53.350427  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:27:53.350584  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:27:53.350658  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:27:53.451355  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:27:53.451572  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:27:54.452237  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000953428s
	I1009 19:27:54.456489  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:27:54.456634  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:27:54.456766  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:27:54.456840  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:31:54.457899  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	I1009 19:31:54.458060  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	I1009 19:31:54.458160  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	I1009 19:31:54.458176  194626 kubeadm.go:318] 
	I1009 19:31:54.458302  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:31:54.458440  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:31:54.458603  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:31:54.458813  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:31:54.458922  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:31:54.459044  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:31:54.459063  194626 kubeadm.go:318] 
	I1009 19:31:54.462630  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:54.462803  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:31:54.463587  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1009 19:31:54.463708  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1009 19:31:54.463871  194626 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000953428s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 19:31:54.463985  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 19:31:57.247677  194626 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.783657743s)
	I1009 19:31:57.247781  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:31:57.261935  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:31:57.261998  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:31:57.270455  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:31:57.270478  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:31:57.270524  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:31:57.278767  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:31:57.278835  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:31:57.286752  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:31:57.295053  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:31:57.295113  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:31:57.303048  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.311557  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:31:57.311631  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.319853  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:31:57.328199  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:31:57.328265  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:31:57.336006  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:31:57.396540  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:57.457937  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:36:00.178531  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 19:36:00.178718  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 19:36:00.182333  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:36:00.182408  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:36:00.182502  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:36:00.182552  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:36:00.182582  194626 kubeadm.go:318] OS: Linux
	I1009 19:36:00.182645  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:36:00.182724  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:36:00.182769  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:36:00.182812  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:36:00.182853  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:36:00.182902  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:36:00.182967  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:36:00.183022  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:36:00.183086  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:36:00.183241  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:36:00.183374  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:36:00.183474  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:36:00.185572  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:36:00.185640  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:36:00.185695  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:36:00.185782  194626 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:36:00.185857  194626 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:36:00.185922  194626 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:36:00.185977  194626 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:36:00.186037  194626 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:36:00.186100  194626 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:36:00.186172  194626 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:36:00.186259  194626 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:36:00.186320  194626 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:36:00.186413  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:36:00.186457  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:36:00.186503  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:36:00.186567  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:36:00.186661  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:36:00.186746  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:36:00.186865  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:36:00.186965  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:36:00.188328  194626 out.go:252]   - Booting up control plane ...
	I1009 19:36:00.188439  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:36:00.188511  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:36:00.188565  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:36:00.188647  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:36:00.188734  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:36:00.188835  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:36:00.188946  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:36:00.188994  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:36:00.189107  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:36:00.189207  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:36:00.189257  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.196157ms
	I1009 19:36:00.189351  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:36:00.189534  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:36:00.189618  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:36:00.189686  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:36:00.189753  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	I1009 19:36:00.189820  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	I1009 19:36:00.189897  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	I1009 19:36:00.189910  194626 kubeadm.go:318] 
	I1009 19:36:00.190035  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:36:00.190115  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:36:00.190192  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:36:00.190268  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:36:00.190333  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:36:00.190440  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:36:00.190476  194626 kubeadm.go:318] 
	I1009 19:36:00.190536  194626 kubeadm.go:402] duration metric: took 8m9.573023353s to StartCluster
	I1009 19:36:00.190620  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:36:00.190688  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:36:00.221163  194626 cri.go:89] found id: ""
	I1009 19:36:00.221200  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.221211  194626 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:36:00.221218  194626 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:36:00.221282  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:36:00.248439  194626 cri.go:89] found id: ""
	I1009 19:36:00.248473  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.248486  194626 logs.go:284] No container was found matching "etcd"
	I1009 19:36:00.248498  194626 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:36:00.248564  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:36:00.275751  194626 cri.go:89] found id: ""
	I1009 19:36:00.275781  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.275792  194626 logs.go:284] No container was found matching "coredns"
	I1009 19:36:00.275801  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:36:00.275868  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:36:00.304183  194626 cri.go:89] found id: ""
	I1009 19:36:00.304218  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.304227  194626 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:36:00.304233  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:36:00.304286  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:36:00.333120  194626 cri.go:89] found id: ""
	I1009 19:36:00.333154  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.333164  194626 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:36:00.333171  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:36:00.333221  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:36:00.362504  194626 cri.go:89] found id: ""
	I1009 19:36:00.362527  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.362536  194626 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:36:00.362542  194626 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:36:00.362602  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:36:00.391912  194626 cri.go:89] found id: ""
	I1009 19:36:00.391939  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.391949  194626 logs.go:284] No container was found matching "kindnet"
	I1009 19:36:00.391964  194626 logs.go:123] Gathering logs for kubelet ...
	I1009 19:36:00.391982  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:36:00.460680  194626 logs.go:123] Gathering logs for dmesg ...
	I1009 19:36:00.460722  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:36:00.473600  194626 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:36:00.473634  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:36:00.538515  194626 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:36:00.538542  194626 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:36:00.538556  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:36:00.601750  194626 logs.go:123] Gathering logs for container status ...
	I1009 19:36:00.601793  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 19:36:00.631755  194626 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 19:36:00.631818  194626 out.go:285] * 
	W1009 19:36:00.631894  194626 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.631914  194626 out.go:285] * 
	W1009 19:36:00.633671  194626 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:36:00.637455  194626 out.go:203] 
	W1009 19:36:00.638829  194626 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.638866  194626 out.go:285] * 
	I1009 19:36:00.640615  194626 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.032876297Z" level=info msg="createCtr: removing container 70ffc7f39282e890f5d610a6d55742a39f611c4b72b1abc5e57fda6647072a96" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.032910812Z" level=info msg="createCtr: deleting container 70ffc7f39282e890f5d610a6d55742a39f611c4b72b1abc5e57fda6647072a96 from storage" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.03537319Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-898615_kube-system_2d43b86ac03e93e11f35c923e2560103_0" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.001285127Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=c02e9596-6e86-4723-9fa0-0c1a59b8ec11 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.002863313Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=7ee67851-e858-47a9-b6e5-3f32f6c3c4d6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.003889028Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-898615/kube-scheduler" id=c755ab01-2e13-41e3-bf5d-99b5861fe9f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.004176819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.00769631Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.008275988Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.030838786Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=c755ab01-2e13-41e3-bf5d-99b5861fe9f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.032303179Z" level=info msg="createCtr: deleting container ID de42baf3157891d1e94f492d869745ae3dbace262ec6d2110858aae19b9d4966 from idIndex" id=c755ab01-2e13-41e3-bf5d-99b5861fe9f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.032342772Z" level=info msg="createCtr: removing container de42baf3157891d1e94f492d869745ae3dbace262ec6d2110858aae19b9d4966" id=c755ab01-2e13-41e3-bf5d-99b5861fe9f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.032389733Z" level=info msg="createCtr: deleting container de42baf3157891d1e94f492d869745ae3dbace262ec6d2110858aae19b9d4966 from storage" id=c755ab01-2e13-41e3-bf5d-99b5861fe9f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.034633841Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-898615_kube-system_bc0e16a1814c5389485436acdfc968ed_0" id=c755ab01-2e13-41e3-bf5d-99b5861fe9f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.000987566Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=5e0f4023-c048-4d45-8521-b7546432f5a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.002003447Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=ebe042e5-2717-438c-a7dc-ab7e11cf19ab name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.003011113Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-898615/kube-controller-manager" id=6811f773-690c-437e-9267-8e4e5d88e7a9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.0032297Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.006977689Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.007515147Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.022761946Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=6811f773-690c-437e-9267-8e4e5d88e7a9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.024311955Z" level=info msg="createCtr: deleting container ID f95e77dad1d5ecf1277f5730f7806b2084810f5c607a20e95175b62d5261f069 from idIndex" id=6811f773-690c-437e-9267-8e4e5d88e7a9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.024350407Z" level=info msg="createCtr: removing container f95e77dad1d5ecf1277f5730f7806b2084810f5c607a20e95175b62d5261f069" id=6811f773-690c-437e-9267-8e4e5d88e7a9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.024401956Z" level=info msg="createCtr: deleting container f95e77dad1d5ecf1277f5730f7806b2084810f5c607a20e95175b62d5261f069 from storage" id=6811f773-690c-437e-9267-8e4e5d88e7a9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.027036682Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-898615_kube-system_606a9e8949f01295d66148e5eac379ce_0" id=6811f773-690c-437e-9267-8e4e5d88e7a9 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:37:42.811518    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:42.812107    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:42.813855    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:42.814435    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:42.816078    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:37:42 up  1:20,  0 user,  load average: 0.02, 0.04, 1.70
	Linux ha-898615 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:37:37 ha-898615 kubelet[1937]:         container etcd start failed in pod etcd-ha-898615_kube-system(2d43b86ac03e93e11f35c923e2560103): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:37 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:37 ha-898615 kubelet[1937]: E1009 19:37:37.035919    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-898615" podUID="2d43b86ac03e93e11f35c923e2560103"
	Oct 09 19:37:39 ha-898615 kubelet[1937]: E1009 19:37:39.717957    1937 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-898615.186ce986e63acb66  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-898615,UID:ha-898615,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-898615 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-898615,},FirstTimestamp:2025-10-09 19:31:59.992523622 +0000 UTC m=+0.320486905,LastTimestamp:2025-10-09 19:31:59.992523622 +0000 UTC m=+0.320486905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-898615,}"
	Oct 09 19:37:40 ha-898615 kubelet[1937]: E1009 19:37:40.000749    1937 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:37:40 ha-898615 kubelet[1937]: E1009 19:37:40.021145    1937 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-898615\" not found"
	Oct 09 19:37:40 ha-898615 kubelet[1937]: E1009 19:37:40.034968    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:37:40 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:40 ha-898615 kubelet[1937]:  > podSandboxID="d9cf0054a77eb17087a85fb70ade0aa16f7510c69fabd94329449a3f5ee8df1b"
	Oct 09 19:37:40 ha-898615 kubelet[1937]: E1009 19:37:40.035092    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:37:40 ha-898615 kubelet[1937]:         container kube-scheduler start failed in pod kube-scheduler-ha-898615_kube-system(bc0e16a1814c5389485436acdfc968ed): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:40 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:40 ha-898615 kubelet[1937]: E1009 19:37:40.035140    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-898615" podUID="bc0e16a1814c5389485436acdfc968ed"
	Oct 09 19:37:41 ha-898615 kubelet[1937]: E1009 19:37:41.000504    1937 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:37:41 ha-898615 kubelet[1937]: E1009 19:37:41.027354    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:37:41 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:41 ha-898615 kubelet[1937]:  > podSandboxID="da9ccac0760165455089feb0a833fa92808edbd25f27148d0daf896a8d20b03b"
	Oct 09 19:37:41 ha-898615 kubelet[1937]: E1009 19:37:41.027526    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:37:41 ha-898615 kubelet[1937]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-898615_kube-system(606a9e8949f01295d66148e5eac379ce): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:41 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:41 ha-898615 kubelet[1937]: E1009 19:37:41.027599    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-898615" podUID="606a9e8949f01295d66148e5eac379ce"
	Oct 09 19:37:41 ha-898615 kubelet[1937]: E1009 19:37:41.645109    1937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:37:41 ha-898615 kubelet[1937]: I1009 19:37:41.817276    1937 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:37:41 ha-898615 kubelet[1937]: E1009 19:37:41.817707    1937 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	Oct 09 19:37:41 ha-898615 kubelet[1937]: E1009 19:37:41.954789    1937 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615: exit status 6 (312.677173ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 19:37:43.212571  204656 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-898615" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (1.72s)

x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-898615" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-898615\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-898615\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-898615\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":nul
l,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list
--output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-898615
helpers_test.go:243: (dbg) docker inspect ha-898615:

-- stdout --
	[
	    {
	        "Id": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	        "Created": "2025-10-09T19:27:46.185451139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195210,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:27:46.220087387Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hostname",
	        "HostsPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hosts",
	        "LogPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e-json.log",
	        "Name": "/ha-898615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-898615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-898615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	                "LowerDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-898615",
	                "Source": "/var/lib/docker/volumes/ha-898615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-898615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-898615",
	                "name.minikube.sigs.k8s.io": "ha-898615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39eaa0a5ee22c7c1e0e0329f3f944afa2ddef6cd571ebd9f0aa050805b81a54f",
	            "SandboxKey": "/var/run/docker/netns/39eaa0a5ee22",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-898615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:31:dc:da:08:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a28b71fbb3464ef90fae817eea2f214ee0a3d1fe379af7ad6c6cc41b0261919e",
	                    "EndpointID": "7930ac5db0626d63671f2a77753202141442de70603e1b4acf6213e0bd34944d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-898615",
	                        "24eb0e5b6c52"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615: exit status 6 (308.552107ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 19:37:43.864585  204909 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-158523 image build -t localhost/my-image:functional-158523 testdata/build --alsologtostderr          │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image ls --format table --alsologtostderr                                                     │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image ls                                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ delete  │ -p functional-158523                                                                                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │ 09 Oct 25 19:27 UTC │
	│ start   │ ha-898615 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- rollout status deployment/busybox                                                          │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node add --alsologtostderr -v 5                                                                       │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node stop m02 --alsologtostderr -v 5                                                                  │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:27:40
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:27:40.840216  194626 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:27:40.840464  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840473  194626 out.go:374] Setting ErrFile to fd 2...
	I1009 19:27:40.840478  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840669  194626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:27:40.841171  194626 out.go:368] Setting JSON to false
	I1009 19:27:40.842061  194626 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4210,"bootTime":1760033851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:27:40.842167  194626 start.go:143] virtualization: kvm guest
	I1009 19:27:40.844410  194626 out.go:179] * [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:27:40.845759  194626 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:27:40.845801  194626 notify.go:221] Checking for updates...
	I1009 19:27:40.848757  194626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:27:40.850175  194626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:27:40.851491  194626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:27:40.852705  194626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:27:40.853892  194626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:27:40.855321  194626 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:27:40.878925  194626 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:27:40.879089  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:40.938194  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.927865936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:40.938299  194626 docker.go:319] overlay module found
	I1009 19:27:40.940236  194626 out.go:179] * Using the docker driver based on user configuration
	I1009 19:27:40.941822  194626 start.go:309] selected driver: docker
	I1009 19:27:40.941843  194626 start.go:930] validating driver "docker" against <nil>
	I1009 19:27:40.941856  194626 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:27:40.942433  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:41.005454  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.995221815 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:41.005685  194626 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 19:27:41.005966  194626 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:27:41.007942  194626 out.go:179] * Using Docker driver with root privileges
	I1009 19:27:41.009445  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:41.009493  194626 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 19:27:41.009501  194626 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:27:41.009585  194626 start.go:353] cluster config:
	{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1009 19:27:41.010949  194626 out.go:179] * Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	I1009 19:27:41.012190  194626 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:27:41.013322  194626 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:27:41.014388  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.014435  194626 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:27:41.014445  194626 cache.go:58] Caching tarball of preloaded images
	I1009 19:27:41.014436  194626 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:27:41.014539  194626 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:27:41.014554  194626 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:27:41.014900  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:41.014929  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json: {Name:mk248a27bef73ecdf1bd71857a83cb8ef52e81ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:41.034874  194626 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:27:41.034900  194626 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:27:41.034920  194626 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:27:41.034961  194626 start.go:361] acquireMachinesLock for ha-898615: {Name:mk23a3ebf19c307a491c00f9452d757837bd240e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:27:41.035085  194626 start.go:365] duration metric: took 102.16µs to acquireMachinesLock for "ha-898615"
	I1009 19:27:41.035119  194626 start.go:94] Provisioning new machine with config: &{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:27:41.035201  194626 start.go:126] createHost starting for "" (driver="docker")
	I1009 19:27:41.037427  194626 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 19:27:41.037659  194626 start.go:160] libmachine.API.Create for "ha-898615" (driver="docker")
	I1009 19:27:41.037695  194626 client.go:168] LocalClient.Create starting
	I1009 19:27:41.037764  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem
	I1009 19:27:41.037804  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037821  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.037887  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem
	I1009 19:27:41.037913  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037927  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.038348  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:27:41.055472  194626 cli_runner.go:211] docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:27:41.055548  194626 network_create.go:284] running [docker network inspect ha-898615] to gather additional debugging logs...
	I1009 19:27:41.055569  194626 cli_runner.go:164] Run: docker network inspect ha-898615
	W1009 19:27:41.074024  194626 cli_runner.go:211] docker network inspect ha-898615 returned with exit code 1
	I1009 19:27:41.074056  194626 network_create.go:287] error running [docker network inspect ha-898615]: docker network inspect ha-898615: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-898615 not found
	I1009 19:27:41.074071  194626 network_create.go:289] output of [docker network inspect ha-898615]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-898615 not found
	
	** /stderr **
	I1009 19:27:41.074158  194626 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:41.091849  194626 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a266e0}
	I1009 19:27:41.091899  194626 network_create.go:124] attempt to create docker network ha-898615 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 19:27:41.091955  194626 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-898615 ha-898615
	I1009 19:27:41.153434  194626 network_create.go:108] docker network ha-898615 192.168.49.0/24 created
	I1009 19:27:41.153469  194626 kic.go:121] calculated static IP "192.168.49.2" for the "ha-898615" container
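The network step above shows minikube finding 192.168.49.0/24 free, creating the ha-898615 bridge network with gateway 192.168.49.1, and reserving 192.168.49.2 for the node container. A minimal way to confirm the result on the same host (network name taken from the log; illustrative only, not part of the test run):

    docker network inspect ha-898615 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # expected, per the log: 192.168.49.0/24 192.168.49.1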
	I1009 19:27:41.153533  194626 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:27:41.170560  194626 cli_runner.go:164] Run: docker volume create ha-898615 --label name.minikube.sigs.k8s.io=ha-898615 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:27:41.190645  194626 oci.go:103] Successfully created a docker volume ha-898615
	I1009 19:27:41.190737  194626 cli_runner.go:164] Run: docker run --rm --name ha-898615-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --entrypoint /usr/bin/test -v ha-898615:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:27:41.600938  194626 oci.go:107] Successfully prepared a docker volume ha-898615
	I1009 19:27:41.600967  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.600993  194626 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:27:41.601055  194626 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 19:27:46.106521  194626 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.505414435s)
	I1009 19:27:46.106562  194626 kic.go:203] duration metric: took 4.50556336s to extract preloaded images to volume ...
	W1009 19:27:46.106658  194626 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 19:27:46.106697  194626 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 19:27:46.106744  194626 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:27:46.168307  194626 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-898615 --name ha-898615 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-898615 --network ha-898615 --ip 192.168.49.2 --volume ha-898615:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:27:46.438758  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Running}}
	I1009 19:27:46.461401  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.481444  194626 cli_runner.go:164] Run: docker exec ha-898615 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:27:46.527865  194626 oci.go:144] the created container "ha-898615" has a running status.
	I1009 19:27:46.527897  194626 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa...
	I1009 19:27:46.644201  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 19:27:46.644249  194626 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:27:46.674019  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.696195  194626 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:27:46.696225  194626 kic_runner.go:114] Args: [docker exec --privileged ha-898615 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:27:46.751886  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.774821  194626 machine.go:93] provisionDockerMachine start ...
	I1009 19:27:46.774929  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.795147  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.795497  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.795517  194626 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:27:46.948463  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:46.948514  194626 ubuntu.go:182] provisioning hostname "ha-898615"
	I1009 19:27:46.948583  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.967551  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.967801  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.967821  194626 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-898615 && echo "ha-898615" | sudo tee /etc/hostname
	I1009 19:27:47.127119  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:47.127192  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.145629  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.145957  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.145990  194626 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-898615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-898615/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-898615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:27:47.294310  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:27:47.294342  194626 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:27:47.294368  194626 ubuntu.go:190] setting up certificates
	I1009 19:27:47.294401  194626 provision.go:84] configureAuth start
	I1009 19:27:47.294454  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:47.312608  194626 provision.go:143] copyHostCerts
	I1009 19:27:47.312651  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312684  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:27:47.312731  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312817  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:27:47.312923  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.312952  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:27:47.312962  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.313014  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:27:47.313086  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313114  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:27:47.313124  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313163  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:27:47.313236  194626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.ha-898615 san=[127.0.0.1 192.168.49.2 ha-898615 localhost minikube]
	I1009 19:27:47.539620  194626 provision.go:177] copyRemoteCerts
	I1009 19:27:47.539697  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:27:47.539740  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.557819  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:47.662665  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:27:47.662781  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:27:47.683372  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:27:47.683457  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:27:47.701669  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:27:47.701736  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:27:47.720164  194626 provision.go:87] duration metric: took 425.744193ms to configureAuth
	I1009 19:27:47.720200  194626 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:27:47.720451  194626 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:47.720564  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.738437  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.738688  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.738714  194626 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:27:48.000107  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:27:48.000134  194626 machine.go:96] duration metric: took 1.225290325s to provisionDockerMachine
	I1009 19:27:48.000148  194626 client.go:171] duration metric: took 6.962444548s to LocalClient.Create
	I1009 19:27:48.000174  194626 start.go:168] duration metric: took 6.962517267s to libmachine.API.Create "ha-898615"
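The provisioning step just above writes CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' to /etc/sysconfig/crio.minikube inside the node container and restarts CRI-O over SSH. A quick sketch for confirming the option landed and the service came back, run from the host against the container named in the log (assumes the node container is still up):

    docker exec ha-898615 cat /etc/sysconfig/crio.minikube
    docker exec ha-898615 systemctl is-active crio   # should print "active"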
	I1009 19:27:48.000183  194626 start.go:294] postStartSetup for "ha-898615" (driver="docker")
	I1009 19:27:48.000199  194626 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:27:48.000266  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:27:48.000306  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.017815  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.123997  194626 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:27:48.127754  194626 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:27:48.127793  194626 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:27:48.127806  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:27:48.127864  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:27:48.127968  194626 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:27:48.127983  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:27:48.128079  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:27:48.136280  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:48.157989  194626 start.go:297] duration metric: took 157.790225ms for postStartSetup
	I1009 19:27:48.158323  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.176302  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:48.176728  194626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:27:48.176785  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.195218  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.296952  194626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:27:48.301724  194626 start.go:129] duration metric: took 7.26650252s to createHost
	I1009 19:27:48.301754  194626 start.go:84] releasing machines lock for "ha-898615", held for 7.266653034s
	I1009 19:27:48.301837  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.319813  194626 ssh_runner.go:195] Run: cat /version.json
	I1009 19:27:48.319860  194626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:27:48.319866  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.319919  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.339000  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.339730  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.494727  194626 ssh_runner.go:195] Run: systemctl --version
	I1009 19:27:48.501535  194626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:27:48.537367  194626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:27:48.542466  194626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:27:48.542536  194626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:27:48.568823  194626 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
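Here the stock bridge and podman CNI configs are renamed with a .mk_disabled suffix so they cannot conflict with the CNI that minikube configures later (kindnet is recommended further down once multinode is detected). To see what remains active inside the node container, something like:

    docker exec ha-898615 ls /etc/cni/net.d/
    # the two files listed in the log should now carry the .mk_disabled suffix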
	I1009 19:27:48.568848  194626 start.go:496] detecting cgroup driver to use...
	I1009 19:27:48.568888  194626 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:27:48.568936  194626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:27:48.585765  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:27:48.598310  194626 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:27:48.598367  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:27:48.615925  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:27:48.634773  194626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:27:48.716369  194626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:27:48.805777  194626 docker.go:234] disabling docker service ...
	I1009 19:27:48.805849  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:27:48.826709  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:27:48.840263  194626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:27:48.923854  194626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:27:49.007492  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:27:49.020704  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:27:49.035117  194626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:27:49.035178  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.045691  194626 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:27:49.045759  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.055054  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.064210  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.073237  194626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:27:49.081510  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.090399  194626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.104388  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.113513  194626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:27:49.121175  194626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:27:49.128977  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.201984  194626 ssh_runner.go:195] Run: sudo systemctl restart crio
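The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted: pause_image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to systemd (matching the cgroup driver detected on the host), conmon_cgroup is set to pod, and net.ipv4.ip_unprivileged_port_start=0 is added under default_sysctls. A sketch of a post-edit check, with the expected values taken from those commands:

    docker exec ha-898615 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf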
	I1009 19:27:49.310640  194626 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:27:49.310716  194626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:27:49.315028  194626 start.go:564] Will wait 60s for crictl version
	I1009 19:27:49.315098  194626 ssh_runner.go:195] Run: which crictl
	I1009 19:27:49.319134  194626 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:27:49.344131  194626 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:27:49.344210  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.374167  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.406880  194626 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:27:49.408191  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:49.425730  194626 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:27:49.430174  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:49.441269  194626 kubeadm.go:883] updating cluster {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:27:49.441374  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:49.441444  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.474537  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.474563  194626 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:27:49.474626  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.501408  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.501432  194626 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:27:49.501440  194626 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:27:49.501565  194626 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-898615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
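The [Unit]/[Service] fragment above is the kubelet drop-in minikube is about to install; the ExecStart line pins the node name and node IP and points the kubelet at the CRI-O endpoint via /var/lib/kubelet/config.yaml. Once the unit file and the 10-kubeadm.conf drop-in have been copied in (a few lines below), the effective unit can be inspected with, for example:

    docker exec ha-898615 systemctl cat kubelet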
	I1009 19:27:49.501644  194626 ssh_runner.go:195] Run: crio config
	I1009 19:27:49.548225  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:49.548247  194626 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:27:49.548270  194626 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:27:49.548295  194626 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-898615 NodeName:ha-898615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:27:49.548487  194626 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-898615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
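The generated kubeadm config above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into a single file that is copied to /var/tmp/minikube/kubeadm.yaml further down. Assuming the `kubeadm config validate` subcommand is available in this kubeadm release, a standalone sanity check would look roughly like:

    docker exec ha-898615 /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml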
	
	I1009 19:27:49.548516  194626 kube-vip.go:115] generating kube-vip config ...
	I1009 19:27:49.548561  194626 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:27:49.560569  194626 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:27:49.560666  194626 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
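The pod spec above is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so the kubelet runs kube-vip as a static pod. Because the ip_vs modules were not found (see the lsmod check above), only the ARP-based VIP 192.168.49.254 on eth0 is expected here, not IPVS load-balancing. Once the control plane is up and kube-vip holds its lease, the address should be visible on the node, e.g.:

    docker exec ha-898615 ip addr show eth0 | grep 192.168.49.254
    docker exec ha-898615 crictl ps --name kube-vip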
	I1009 19:27:49.560713  194626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:27:49.568926  194626 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:27:49.569017  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:27:49.577516  194626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:27:49.590951  194626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:27:49.606324  194626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 19:27:49.618986  194626 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 19:27:49.633526  194626 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:27:49.637412  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:49.647415  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.727039  194626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:49.749827  194626 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615 for IP: 192.168.49.2
	I1009 19:27:49.749848  194626 certs.go:195] generating shared ca certs ...
	I1009 19:27:49.749916  194626 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.750063  194626 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:27:49.750105  194626 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:27:49.750114  194626 certs.go:257] generating profile certs ...
	I1009 19:27:49.750166  194626 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key
	I1009 19:27:49.750191  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt with IP's: []
	I1009 19:27:49.922286  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt ...
	I1009 19:27:49.922322  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt: {Name:mke5f0b5d846145d5885091c3fdef11a03b4705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923243  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key ...
	I1009 19:27:49.923266  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key: {Name:mkb5fe559278888df39f6f81eb44f11c5b40eebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923364  194626 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38
	I1009 19:27:49.923404  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 19:27:50.174751  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 ...
	I1009 19:27:50.174785  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38: {Name:mk7158926fc69073b0db0c318bd9373ec8743788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.174962  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 ...
	I1009 19:27:50.174976  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38: {Name:mk3dc2ee28c7f607abddd15bf98fe46423492612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.175056  194626 certs.go:382] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt
	I1009 19:27:50.175135  194626 certs.go:386] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key
	I1009 19:27:50.175189  194626 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key
	I1009 19:27:50.175204  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt with IP's: []
	I1009 19:27:50.225715  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt ...
	I1009 19:27:50.225750  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt: {Name:mk61103353c5c3a0cbf54b18a910b0c83f19ee7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.225929  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key ...
	I1009 19:27:50.225941  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key: {Name:mkbe3072f67f16772d02b0025253c832dee4c437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.226013  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:27:50.226031  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:27:50.226044  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:27:50.226054  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:27:50.226067  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:27:50.226077  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:27:50.226088  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:27:50.226099  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:27:50.226151  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:27:50.226183  194626 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:27:50.226195  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:27:50.226219  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:27:50.226240  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:27:50.226260  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:27:50.226298  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:50.226322  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.226336  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.226348  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.226853  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:27:50.245895  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:27:50.263570  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:27:50.281671  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:27:50.300223  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:27:50.317734  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:27:50.335066  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:27:50.352704  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:27:50.370437  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:27:50.389421  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:27:50.406983  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:27:50.425614  194626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:27:50.439040  194626 ssh_runner.go:195] Run: openssl version
	I1009 19:27:50.445304  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:27:50.454013  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457873  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457929  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.491581  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:27:50.500958  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:27:50.509597  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513479  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513541  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.548203  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:27:50.557261  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:27:50.565908  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570090  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570139  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.604463  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
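At this point the shared CAs and the profile certificates have been generated and copied into /var/lib/minikube/certs, and the host CA bundle links are in place. The apiserver certificate was signed for the service IP, loopback, the node IP and the HA VIP (see the SAN list in the generation step above); if openssl is available in the node image, that can be confirmed with:

    docker exec ha-898615 openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
        | grep -A1 'Subject Alternative Name'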
	I1009 19:27:50.613756  194626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:27:50.617456  194626 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:27:50.617519  194626 kubeadm.go:400] StartCluster: {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Socke
tVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:50.617611  194626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:27:50.617699  194626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:27:50.651007  194626 cri.go:89] found id: ""
	I1009 19:27:50.651094  194626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:27:50.660625  194626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:27:50.669536  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:27:50.669585  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:27:50.678565  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:27:50.678583  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:27:50.678632  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:27:50.686673  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:27:50.686739  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:27:50.694481  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:27:50.702511  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:27:50.702571  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:27:50.711335  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.719367  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:27:50.719437  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.727079  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:27:50.735773  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:27:50.735828  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:27:50.743402  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:27:50.783476  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:27:50.783553  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:27:50.804988  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:27:50.805072  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:27:50.805120  194626 kubeadm.go:318] OS: Linux
	I1009 19:27:50.805170  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:27:50.805244  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:27:50.805294  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:27:50.805368  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:27:50.805464  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:27:50.805570  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:27:50.805644  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:27:50.805709  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:27:50.865026  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:27:50.865138  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:27:50.865228  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:27:50.872900  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:27:50.874725  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:27:50.874828  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:27:50.874946  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:27:51.069934  194626 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:27:51.167065  194626 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:27:51.280550  194626 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:27:51.481119  194626 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:27:51.788062  194626 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:27:51.788230  194626 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:51.872181  194626 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:27:51.872362  194626 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:52.084923  194626 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:27:52.283000  194626 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:27:52.633617  194626 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:27:52.633683  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:27:52.795142  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:27:52.946617  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:27:53.060032  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:27:53.125689  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:27:53.322626  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:27:53.323225  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:27:53.325508  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:27:53.328645  194626 out.go:252]   - Booting up control plane ...
	I1009 19:27:53.328749  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:27:53.328835  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:27:53.328914  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:27:53.343297  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:27:53.343474  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:27:53.350427  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:27:53.350584  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:27:53.350658  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:27:53.451355  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:27:53.451572  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:27:54.452237  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000953428s
	I1009 19:27:54.456489  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:27:54.456634  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:27:54.456766  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:27:54.456840  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:31:54.457899  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	I1009 19:31:54.458060  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	I1009 19:31:54.458160  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	I1009 19:31:54.458176  194626 kubeadm.go:318] 
	I1009 19:31:54.458302  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:31:54.458440  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:31:54.458603  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:31:54.458813  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:31:54.458922  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:31:54.459044  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:31:54.459063  194626 kubeadm.go:318] 
	I1009 19:31:54.462630  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:54.462803  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:31:54.463587  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1009 19:31:54.463708  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1009 19:31:54.463871  194626 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000953428s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 19:31:54.463985  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 19:31:57.247677  194626 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.783657743s)
	I1009 19:31:57.247781  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:31:57.261935  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:31:57.261998  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:31:57.270455  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:31:57.270478  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:31:57.270524  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:31:57.278767  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:31:57.278835  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:31:57.286752  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:31:57.295053  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:31:57.295113  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:31:57.303048  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.311557  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:31:57.311631  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.319853  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:31:57.328199  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:31:57.328265  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:31:57.336006  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:31:57.396540  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:57.457937  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:36:00.178531  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 19:36:00.178718  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 19:36:00.182333  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:36:00.182408  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:36:00.182502  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:36:00.182552  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:36:00.182582  194626 kubeadm.go:318] OS: Linux
	I1009 19:36:00.182645  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:36:00.182724  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:36:00.182769  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:36:00.182812  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:36:00.182853  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:36:00.182902  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:36:00.182967  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:36:00.183022  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:36:00.183086  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:36:00.183241  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:36:00.183374  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:36:00.183474  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:36:00.185572  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:36:00.185640  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:36:00.185695  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:36:00.185782  194626 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:36:00.185857  194626 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:36:00.185922  194626 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:36:00.185977  194626 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:36:00.186037  194626 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:36:00.186100  194626 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:36:00.186172  194626 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:36:00.186259  194626 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:36:00.186320  194626 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:36:00.186413  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:36:00.186457  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:36:00.186503  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:36:00.186567  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:36:00.186661  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:36:00.186746  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:36:00.186865  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:36:00.186965  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:36:00.188328  194626 out.go:252]   - Booting up control plane ...
	I1009 19:36:00.188439  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:36:00.188511  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:36:00.188565  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:36:00.188647  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:36:00.188734  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:36:00.188835  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:36:00.188946  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:36:00.188994  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:36:00.189107  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:36:00.189207  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:36:00.189257  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.196157ms
	I1009 19:36:00.189351  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:36:00.189534  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:36:00.189618  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:36:00.189686  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:36:00.189753  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	I1009 19:36:00.189820  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	I1009 19:36:00.189897  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	I1009 19:36:00.189910  194626 kubeadm.go:318] 
	I1009 19:36:00.190035  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:36:00.190115  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:36:00.190192  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:36:00.190268  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:36:00.190333  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:36:00.190440  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:36:00.190476  194626 kubeadm.go:318] 
	I1009 19:36:00.190536  194626 kubeadm.go:402] duration metric: took 8m9.573023353s to StartCluster
	I1009 19:36:00.190620  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:36:00.190688  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:36:00.221163  194626 cri.go:89] found id: ""
	I1009 19:36:00.221200  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.221211  194626 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:36:00.221218  194626 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:36:00.221282  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:36:00.248439  194626 cri.go:89] found id: ""
	I1009 19:36:00.248473  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.248486  194626 logs.go:284] No container was found matching "etcd"
	I1009 19:36:00.248498  194626 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:36:00.248564  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:36:00.275751  194626 cri.go:89] found id: ""
	I1009 19:36:00.275781  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.275792  194626 logs.go:284] No container was found matching "coredns"
	I1009 19:36:00.275801  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:36:00.275868  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:36:00.304183  194626 cri.go:89] found id: ""
	I1009 19:36:00.304218  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.304227  194626 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:36:00.304233  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:36:00.304286  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:36:00.333120  194626 cri.go:89] found id: ""
	I1009 19:36:00.333154  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.333164  194626 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:36:00.333171  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:36:00.333221  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:36:00.362504  194626 cri.go:89] found id: ""
	I1009 19:36:00.362527  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.362536  194626 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:36:00.362542  194626 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:36:00.362602  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:36:00.391912  194626 cri.go:89] found id: ""
	I1009 19:36:00.391939  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.391949  194626 logs.go:284] No container was found matching "kindnet"
	I1009 19:36:00.391964  194626 logs.go:123] Gathering logs for kubelet ...
	I1009 19:36:00.391982  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:36:00.460680  194626 logs.go:123] Gathering logs for dmesg ...
	I1009 19:36:00.460722  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:36:00.473600  194626 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:36:00.473634  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:36:00.538515  194626 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:36:00.538542  194626 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:36:00.538556  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:36:00.601750  194626 logs.go:123] Gathering logs for container status ...
	I1009 19:36:00.601793  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 19:36:00.631755  194626 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 19:36:00.631818  194626 out.go:285] * 
	W1009 19:36:00.631894  194626 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.631914  194626 out.go:285] * 
	W1009 19:36:00.633671  194626 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:36:00.637455  194626 out.go:203] 
	W1009 19:36:00.638829  194626 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.638866  194626 out.go:285] * 
	I1009 19:36:00.640615  194626 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.032876297Z" level=info msg="createCtr: removing container 70ffc7f39282e890f5d610a6d55742a39f611c4b72b1abc5e57fda6647072a96" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.032910812Z" level=info msg="createCtr: deleting container 70ffc7f39282e890f5d610a6d55742a39f611c4b72b1abc5e57fda6647072a96 from storage" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:37 ha-898615 crio[777]: time="2025-10-09T19:37:37.03537319Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-898615_kube-system_2d43b86ac03e93e11f35c923e2560103_0" id=259f5abd-bf82-440f-a6ae-2a42f91cbbd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.001285127Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=c02e9596-6e86-4723-9fa0-0c1a59b8ec11 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.002863313Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=7ee67851-e858-47a9-b6e5-3f32f6c3c4d6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.003889028Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-898615/kube-scheduler" id=c755ab01-2e13-41e3-bf5d-99b5861fe9f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.004176819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.00769631Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.008275988Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.030838786Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=c755ab01-2e13-41e3-bf5d-99b5861fe9f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.032303179Z" level=info msg="createCtr: deleting container ID de42baf3157891d1e94f492d869745ae3dbace262ec6d2110858aae19b9d4966 from idIndex" id=c755ab01-2e13-41e3-bf5d-99b5861fe9f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.032342772Z" level=info msg="createCtr: removing container de42baf3157891d1e94f492d869745ae3dbace262ec6d2110858aae19b9d4966" id=c755ab01-2e13-41e3-bf5d-99b5861fe9f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.032389733Z" level=info msg="createCtr: deleting container de42baf3157891d1e94f492d869745ae3dbace262ec6d2110858aae19b9d4966 from storage" id=c755ab01-2e13-41e3-bf5d-99b5861fe9f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:40 ha-898615 crio[777]: time="2025-10-09T19:37:40.034633841Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-898615_kube-system_bc0e16a1814c5389485436acdfc968ed_0" id=c755ab01-2e13-41e3-bf5d-99b5861fe9f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.000987566Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=5e0f4023-c048-4d45-8521-b7546432f5a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.002003447Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=ebe042e5-2717-438c-a7dc-ab7e11cf19ab name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.003011113Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-898615/kube-controller-manager" id=6811f773-690c-437e-9267-8e4e5d88e7a9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.0032297Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.006977689Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.007515147Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.022761946Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=6811f773-690c-437e-9267-8e4e5d88e7a9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.024311955Z" level=info msg="createCtr: deleting container ID f95e77dad1d5ecf1277f5730f7806b2084810f5c607a20e95175b62d5261f069 from idIndex" id=6811f773-690c-437e-9267-8e4e5d88e7a9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.024350407Z" level=info msg="createCtr: removing container f95e77dad1d5ecf1277f5730f7806b2084810f5c607a20e95175b62d5261f069" id=6811f773-690c-437e-9267-8e4e5d88e7a9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.024401956Z" level=info msg="createCtr: deleting container f95e77dad1d5ecf1277f5730f7806b2084810f5c607a20e95175b62d5261f069 from storage" id=6811f773-690c-437e-9267-8e4e5d88e7a9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:37:41 ha-898615 crio[777]: time="2025-10-09T19:37:41.027036682Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-898615_kube-system_606a9e8949f01295d66148e5eac379ce_0" id=6811f773-690c-437e-9267-8e4e5d88e7a9 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:37:44.473636    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:44.474208    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:44.475870    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:44.476340    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:37:44.477816    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:37:44 up  1:20,  0 user,  load average: 0.02, 0.04, 1.70
	Linux ha-898615 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:37:37 ha-898615 kubelet[1937]:         container etcd start failed in pod etcd-ha-898615_kube-system(2d43b86ac03e93e11f35c923e2560103): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:37 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:37 ha-898615 kubelet[1937]: E1009 19:37:37.035919    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-898615" podUID="2d43b86ac03e93e11f35c923e2560103"
	Oct 09 19:37:39 ha-898615 kubelet[1937]: E1009 19:37:39.717957    1937 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-898615.186ce986e63acb66  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-898615,UID:ha-898615,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-898615 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-898615,},FirstTimestamp:2025-10-09 19:31:59.992523622 +0000 UTC m=+0.320486905,LastTimestamp:2025-10-09 19:31:59.992523622 +0000 UTC m=+0.320486905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-898615,}"
	Oct 09 19:37:40 ha-898615 kubelet[1937]: E1009 19:37:40.000749    1937 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:37:40 ha-898615 kubelet[1937]: E1009 19:37:40.021145    1937 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-898615\" not found"
	Oct 09 19:37:40 ha-898615 kubelet[1937]: E1009 19:37:40.034968    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:37:40 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:40 ha-898615 kubelet[1937]:  > podSandboxID="d9cf0054a77eb17087a85fb70ade0aa16f7510c69fabd94329449a3f5ee8df1b"
	Oct 09 19:37:40 ha-898615 kubelet[1937]: E1009 19:37:40.035092    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:37:40 ha-898615 kubelet[1937]:         container kube-scheduler start failed in pod kube-scheduler-ha-898615_kube-system(bc0e16a1814c5389485436acdfc968ed): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:40 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:40 ha-898615 kubelet[1937]: E1009 19:37:40.035140    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-898615" podUID="bc0e16a1814c5389485436acdfc968ed"
	Oct 09 19:37:41 ha-898615 kubelet[1937]: E1009 19:37:41.000504    1937 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:37:41 ha-898615 kubelet[1937]: E1009 19:37:41.027354    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:37:41 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:41 ha-898615 kubelet[1937]:  > podSandboxID="da9ccac0760165455089feb0a833fa92808edbd25f27148d0daf896a8d20b03b"
	Oct 09 19:37:41 ha-898615 kubelet[1937]: E1009 19:37:41.027526    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:37:41 ha-898615 kubelet[1937]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-898615_kube-system(606a9e8949f01295d66148e5eac379ce): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:37:41 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:37:41 ha-898615 kubelet[1937]: E1009 19:37:41.027599    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-898615" podUID="606a9e8949f01295d66148e5eac379ce"
	Oct 09 19:37:41 ha-898615 kubelet[1937]: E1009 19:37:41.645109    1937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:37:41 ha-898615 kubelet[1937]: I1009 19:37:41.817276    1937 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:37:41 ha-898615 kubelet[1937]: E1009 19:37:41.817707    1937 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	Oct 09 19:37:41 ha-898615 kubelet[1937]: E1009 19:37:41.954789    1937 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615: exit status 6 (312.592791ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:37:44.870041  205259 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-898615" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.66s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (48.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 node start m02 --alsologtostderr -v 5: exit status 85 (59.044983ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:37:44.930815  205376 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:37:44.931076  205376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:44.931085  205376 out.go:374] Setting ErrFile to fd 2...
	I1009 19:37:44.931089  205376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:44.931292  205376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:37:44.931594  205376 mustload.go:65] Loading cluster: ha-898615
	I1009 19:37:44.931935  205376 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:37:44.934444  205376 out.go:203] 
	W1009 19:37:44.935891  205376 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1009 19:37:44.935907  205376 out.go:285] * 
	* 
	W1009 19:37:44.939133  205376 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:37:44.940673  205376 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:424: I1009 19:37:44.930815  205376 out.go:360] Setting OutFile to fd 1 ...
I1009 19:37:44.931076  205376 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:37:44.931085  205376 out.go:374] Setting ErrFile to fd 2...
I1009 19:37:44.931089  205376 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:37:44.931292  205376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
I1009 19:37:44.931594  205376 mustload.go:65] Loading cluster: ha-898615
I1009 19:37:44.931935  205376 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:37:44.934444  205376 out.go:203] 
W1009 19:37:44.935891  205376 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1009 19:37:44.935907  205376 out.go:285] * 
* 
W1009 19:37:44.939133  205376 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1009 19:37:44.940673  205376 out.go:203] 

                                                
                                                
ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-linux-amd64 -p ha-898615 node start m02 --alsologtostderr -v 5": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5: exit status 6 (308.92095ms)

                                                
                                                
-- stdout --
	ha-898615
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:37:44.994610  205387 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:37:44.994849  205387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:44.994858  205387 out.go:374] Setting ErrFile to fd 2...
	I1009 19:37:44.994862  205387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:44.995084  205387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:37:44.995251  205387 out.go:368] Setting JSON to false
	I1009 19:37:44.995276  205387 mustload.go:65] Loading cluster: ha-898615
	I1009 19:37:44.995452  205387 notify.go:221] Checking for updates...
	I1009 19:37:44.995688  205387 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:37:44.995711  205387 status.go:174] checking status of ha-898615 ...
	I1009 19:37:44.996221  205387 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:37:45.018652  205387 status.go:371] ha-898615 host status = "Running" (err=<nil>)
	I1009 19:37:45.018694  205387 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:37:45.018959  205387 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:37:45.038195  205387 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:37:45.038571  205387 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:37:45.038632  205387 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:37:45.056494  205387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:37:45.161087  205387 ssh_runner.go:195] Run: systemctl --version
	I1009 19:37:45.167814  205387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:37:45.180621  205387 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:37:45.238708  205387 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:37:45.227483287 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 19:37:45.239294  205387 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:37:45.239336  205387 api_server.go:166] Checking apiserver status ...
	I1009 19:37:45.239401  205387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 19:37:45.249815  205387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:37:45.249843  205387 status.go:463] ha-898615 apiserver status = Running (err=<nil>)
	I1009 19:37:45.249857  205387 status.go:176] ha-898615 status: &{Name:ha-898615 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1009 19:37:45.255754  141519 retry.go:31] will retry after 938.578637ms: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5: exit status 6 (300.017752ms)

                                                
                                                
-- stdout --
	ha-898615
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:37:46.240262  205508 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:37:46.240565  205508 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:46.240575  205508 out.go:374] Setting ErrFile to fd 2...
	I1009 19:37:46.240580  205508 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:46.240769  205508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:37:46.240933  205508 out.go:368] Setting JSON to false
	I1009 19:37:46.240957  205508 mustload.go:65] Loading cluster: ha-898615
	I1009 19:37:46.241053  205508 notify.go:221] Checking for updates...
	I1009 19:37:46.241271  205508 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:37:46.241285  205508 status.go:174] checking status of ha-898615 ...
	I1009 19:37:46.241748  205508 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:37:46.263011  205508 status.go:371] ha-898615 host status = "Running" (err=<nil>)
	I1009 19:37:46.263044  205508 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:37:46.263400  205508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:37:46.281226  205508 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:37:46.281604  205508 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:37:46.281672  205508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:37:46.299760  205508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:37:46.402281  205508 ssh_runner.go:195] Run: systemctl --version
	I1009 19:37:46.409100  205508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:37:46.422277  205508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:37:46.477268  205508 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:37:46.467110481 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 19:37:46.477775  205508 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:37:46.477819  205508 api_server.go:166] Checking apiserver status ...
	I1009 19:37:46.477873  205508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 19:37:46.489205  205508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:37:46.489225  205508 status.go:463] ha-898615 apiserver status = Running (err=<nil>)
	I1009 19:37:46.489236  205508 status.go:176] ha-898615 status: &{Name:ha-898615 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1009 19:37:46.495221  141519 retry.go:31] will retry after 1.490272595s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5: exit status 6 (305.679312ms)

                                                
                                                
-- stdout --
	ha-898615
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:37:48.032256  205624 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:37:48.032524  205624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:48.032533  205624 out.go:374] Setting ErrFile to fd 2...
	I1009 19:37:48.032538  205624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:48.032750  205624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:37:48.032924  205624 out.go:368] Setting JSON to false
	I1009 19:37:48.032948  205624 mustload.go:65] Loading cluster: ha-898615
	I1009 19:37:48.033059  205624 notify.go:221] Checking for updates...
	I1009 19:37:48.033297  205624 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:37:48.033311  205624 status.go:174] checking status of ha-898615 ...
	I1009 19:37:48.033722  205624 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:37:48.054654  205624 status.go:371] ha-898615 host status = "Running" (err=<nil>)
	I1009 19:37:48.054709  205624 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:37:48.055019  205624 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:37:48.073632  205624 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:37:48.074010  205624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:37:48.074069  205624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:37:48.094892  205624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:37:48.195710  205624 ssh_runner.go:195] Run: systemctl --version
	I1009 19:37:48.202105  205624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:37:48.214134  205624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:37:48.274682  205624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:37:48.264341264 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 19:37:48.275162  205624 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:37:48.275197  205624 api_server.go:166] Checking apiserver status ...
	I1009 19:37:48.275246  205624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 19:37:48.286218  205624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:37:48.286242  205624 status.go:463] ha-898615 apiserver status = Running (err=<nil>)
	I1009 19:37:48.286256  205624 status.go:176] ha-898615 status: &{Name:ha-898615 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1009 19:37:48.292847  141519 retry.go:31] will retry after 2.061893535s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5: exit status 6 (302.490059ms)

                                                
                                                
-- stdout --
	ha-898615
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:37:50.401292  205762 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:37:50.401611  205762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:50.401623  205762 out.go:374] Setting ErrFile to fd 2...
	I1009 19:37:50.401627  205762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:50.401806  205762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:37:50.401991  205762 out.go:368] Setting JSON to false
	I1009 19:37:50.402018  205762 mustload.go:65] Loading cluster: ha-898615
	I1009 19:37:50.402061  205762 notify.go:221] Checking for updates...
	I1009 19:37:50.402370  205762 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:37:50.402402  205762 status.go:174] checking status of ha-898615 ...
	I1009 19:37:50.402810  205762 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:37:50.423046  205762 status.go:371] ha-898615 host status = "Running" (err=<nil>)
	I1009 19:37:50.423077  205762 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:37:50.423354  205762 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:37:50.442217  205762 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:37:50.442559  205762 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:37:50.442627  205762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:37:50.461677  205762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:37:50.562693  205762 ssh_runner.go:195] Run: systemctl --version
	I1009 19:37:50.569077  205762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:37:50.581972  205762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:37:50.641627  205762 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:37:50.630415998 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 19:37:50.642037  205762 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:37:50.642063  205762 api_server.go:166] Checking apiserver status ...
	I1009 19:37:50.642097  205762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 19:37:50.652934  205762 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:37:50.652977  205762 status.go:463] ha-898615 apiserver status = Running (err=<nil>)
	I1009 19:37:50.652988  205762 status.go:176] ha-898615 status: &{Name:ha-898615 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1009 19:37:50.658804  141519 retry.go:31] will retry after 2.836646559s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5: exit status 6 (297.080534ms)

                                                
                                                
-- stdout --
	ha-898615
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:37:53.539841  205881 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:37:53.540124  205881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:53.540135  205881 out.go:374] Setting ErrFile to fd 2...
	I1009 19:37:53.540140  205881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:53.540410  205881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:37:53.540631  205881 out.go:368] Setting JSON to false
	I1009 19:37:53.540660  205881 mustload.go:65] Loading cluster: ha-898615
	I1009 19:37:53.540847  205881 notify.go:221] Checking for updates...
	I1009 19:37:53.541119  205881 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:37:53.541136  205881 status.go:174] checking status of ha-898615 ...
	I1009 19:37:53.541656  205881 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:37:53.560516  205881 status.go:371] ha-898615 host status = "Running" (err=<nil>)
	I1009 19:37:53.560555  205881 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:37:53.560888  205881 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:37:53.578595  205881 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:37:53.578942  205881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:37:53.578990  205881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:37:53.596655  205881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:37:53.698179  205881 ssh_runner.go:195] Run: systemctl --version
	I1009 19:37:53.704675  205881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:37:53.717461  205881 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:37:53.775887  205881 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:37:53.765203507 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 19:37:53.776365  205881 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:37:53.776416  205881 api_server.go:166] Checking apiserver status ...
	I1009 19:37:53.776462  205881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 19:37:53.787257  205881 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:37:53.787280  205881 status.go:463] ha-898615 apiserver status = Running (err=<nil>)
	I1009 19:37:53.787291  205881 status.go:176] ha-898615 status: &{Name:ha-898615 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1009 19:37:53.793041  141519 retry.go:31] will retry after 7.496996409s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5: exit status 6 (307.228291ms)

                                                
                                                
-- stdout --
	ha-898615
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:38:01.339541  206038 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:38:01.340020  206038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:01.340036  206038 out.go:374] Setting ErrFile to fd 2...
	I1009 19:38:01.340042  206038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:01.340313  206038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:38:01.340567  206038 out.go:368] Setting JSON to false
	I1009 19:38:01.340605  206038 mustload.go:65] Loading cluster: ha-898615
	I1009 19:38:01.340758  206038 notify.go:221] Checking for updates...
	I1009 19:38:01.341043  206038 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:01.341062  206038 status.go:174] checking status of ha-898615 ...
	I1009 19:38:01.341542  206038 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:01.361630  206038 status.go:371] ha-898615 host status = "Running" (err=<nil>)
	I1009 19:38:01.361664  206038 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:38:01.361926  206038 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:38:01.380746  206038 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:38:01.381085  206038 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:38:01.381150  206038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:01.399543  206038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:01.501879  206038 ssh_runner.go:195] Run: systemctl --version
	I1009 19:38:01.508505  206038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:38:01.521338  206038 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:38:01.582215  206038 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:38:01.571603155 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 19:38:01.582636  206038 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:38:01.582669  206038 api_server.go:166] Checking apiserver status ...
	I1009 19:38:01.582704  206038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 19:38:01.594338  206038 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:38:01.594365  206038 status.go:463] ha-898615 apiserver status = Running (err=<nil>)
	I1009 19:38:01.594390  206038 status.go:176] ha-898615 status: &{Name:ha-898615 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1009 19:38:01.600979  141519 retry.go:31] will retry after 6.409913525s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5: exit status 6 (307.746966ms)

                                                
                                                
-- stdout --
	ha-898615
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:38:08.059856  206183 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:38:08.060104  206183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:08.060113  206183 out.go:374] Setting ErrFile to fd 2...
	I1009 19:38:08.060117  206183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:08.060357  206183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:38:08.060547  206183 out.go:368] Setting JSON to false
	I1009 19:38:08.060575  206183 mustload.go:65] Loading cluster: ha-898615
	I1009 19:38:08.060755  206183 notify.go:221] Checking for updates...
	I1009 19:38:08.060960  206183 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:08.060978  206183 status.go:174] checking status of ha-898615 ...
	I1009 19:38:08.061441  206183 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:08.080802  206183 status.go:371] ha-898615 host status = "Running" (err=<nil>)
	I1009 19:38:08.080848  206183 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:38:08.081157  206183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:38:08.099759  206183 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:38:08.100075  206183 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:38:08.100125  206183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:08.118325  206183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:08.220413  206183 ssh_runner.go:195] Run: systemctl --version
	I1009 19:38:08.227297  206183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:38:08.240338  206183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:38:08.301112  206183 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:38:08.289548336 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 19:38:08.301574  206183 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:38:08.301606  206183 api_server.go:166] Checking apiserver status ...
	I1009 19:38:08.301646  206183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 19:38:08.313023  206183 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:38:08.313056  206183 status.go:463] ha-898615 apiserver status = Running (err=<nil>)
	I1009 19:38:08.313068  206183 status.go:176] ha-898615 status: &{Name:ha-898615 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1009 19:38:08.319327  141519 retry.go:31] will retry after 14.093143036s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5: exit status 6 (305.19445ms)

                                                
                                                
-- stdout --
	ha-898615
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:38:22.464448  206370 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:38:22.464744  206370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:22.464755  206370 out.go:374] Setting ErrFile to fd 2...
	I1009 19:38:22.464759  206370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:22.465033  206370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:38:22.465284  206370 out.go:368] Setting JSON to false
	I1009 19:38:22.465314  206370 mustload.go:65] Loading cluster: ha-898615
	I1009 19:38:22.465465  206370 notify.go:221] Checking for updates...
	I1009 19:38:22.465811  206370 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:22.465831  206370 status.go:174] checking status of ha-898615 ...
	I1009 19:38:22.466309  206370 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:22.485112  206370 status.go:371] ha-898615 host status = "Running" (err=<nil>)
	I1009 19:38:22.485137  206370 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:38:22.485421  206370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:38:22.505048  206370 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:38:22.505544  206370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:38:22.505619  206370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:22.524533  206370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:22.627530  206370 ssh_runner.go:195] Run: systemctl --version
	I1009 19:38:22.634508  206370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:38:22.647603  206370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:38:22.705599  206370 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:38:22.695279906 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 19:38:22.706041  206370 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:38:22.706072  206370 api_server.go:166] Checking apiserver status ...
	I1009 19:38:22.706108  206370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 19:38:22.716714  206370 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:38:22.716735  206370 status.go:463] ha-898615 apiserver status = Running (err=<nil>)
	I1009 19:38:22.716746  206370 status.go:176] ha-898615 status: &{Name:ha-898615 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1009 19:38:22.722799  141519 retry.go:31] will retry after 8.650027483s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5: exit status 6 (302.747993ms)

                                                
                                                
-- stdout --
	ha-898615
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:38:31.423234  206530 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:38:31.423528  206530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:31.423539  206530 out.go:374] Setting ErrFile to fd 2...
	I1009 19:38:31.423543  206530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:31.423804  206530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:38:31.424066  206530 out.go:368] Setting JSON to false
	I1009 19:38:31.424097  206530 mustload.go:65] Loading cluster: ha-898615
	I1009 19:38:31.424289  206530 notify.go:221] Checking for updates...
	I1009 19:38:31.424603  206530 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:31.424627  206530 status.go:174] checking status of ha-898615 ...
	I1009 19:38:31.425120  206530 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:31.444026  206530 status.go:371] ha-898615 host status = "Running" (err=<nil>)
	I1009 19:38:31.444050  206530 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:38:31.444310  206530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:38:31.461726  206530 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:38:31.462043  206530 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:38:31.462100  206530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:31.480424  206530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:31.584312  206530 ssh_runner.go:195] Run: systemctl --version
	I1009 19:38:31.591295  206530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:38:31.604738  206530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:38:31.661859  206530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:38:31.652176036 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 19:38:31.662474  206530 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:38:31.662518  206530 api_server.go:166] Checking apiserver status ...
	I1009 19:38:31.662560  206530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 19:38:31.673467  206530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:38:31.673495  206530 status.go:463] ha-898615 apiserver status = Running (err=<nil>)
	I1009 19:38:31.673511  206530 status.go:176] ha-898615 status: &{Name:ha-898615 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5" : exit status 6
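Note: every retry above fails the same way: status.go:458 cannot find the "ha-898615" context in the test kubeconfig, so the profile is reported as Kubeconfig:Misconfigured and each status call exits with status 6 even though the host and kubelet are running. A minimal diagnostic sketch, not part of the test run itself; the kubeconfig path and profile name are taken from the log above, and the exact exit-code behaviour is an assumption based on the output shown here:

	# list the contexts in the kubeconfig the harness points minikube at (path from the log above)
	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/21683-137890/kubeconfig
	# rewrite the context for this profile, as the status warning suggests
	out/minikube-linux-amd64 -p ha-898615 update-context
	# re-check status; the exit status 6 should clear once the context is present
	out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5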
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-898615
helpers_test.go:243: (dbg) docker inspect ha-898615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	        "Created": "2025-10-09T19:27:46.185451139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195210,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:27:46.220087387Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hostname",
	        "HostsPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hosts",
	        "LogPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e-json.log",
	        "Name": "/ha-898615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-898615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-898615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	                "LowerDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-898615",
	                "Source": "/var/lib/docker/volumes/ha-898615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-898615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-898615",
	                "name.minikube.sigs.k8s.io": "ha-898615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39eaa0a5ee22c7c1e0e0329f3f944afa2ddef6cd571ebd9f0aa050805b81a54f",
	            "SandboxKey": "/var/run/docker/netns/39eaa0a5ee22",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-898615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:31:dc:da:08:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a28b71fbb3464ef90fae817eea2f214ee0a3d1fe379af7ad6c6cc41b0261919e",
	                    "EndpointID": "7930ac5db0626d63671f2a77753202141442de70603e1b4acf6213e0bd34944d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-898615",
	                        "24eb0e5b6c52"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615: exit status 6 (311.712517ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:38:31.993021  206655 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-158523 image ls --format table --alsologtostderr                                                     │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image ls                                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ delete  │ -p functional-158523                                                                                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │ 09 Oct 25 19:27 UTC │
	│ start   │ ha-898615 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- rollout status deployment/busybox                                                          │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node add --alsologtostderr -v 5                                                                       │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node stop m02 --alsologtostderr -v 5                                                                  │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node start m02 --alsologtostderr -v 5                                                                 │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:27:40
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:27:40.840216  194626 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:27:40.840464  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840473  194626 out.go:374] Setting ErrFile to fd 2...
	I1009 19:27:40.840478  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840669  194626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:27:40.841171  194626 out.go:368] Setting JSON to false
	I1009 19:27:40.842061  194626 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4210,"bootTime":1760033851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:27:40.842167  194626 start.go:143] virtualization: kvm guest
	I1009 19:27:40.844410  194626 out.go:179] * [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:27:40.845759  194626 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:27:40.845801  194626 notify.go:221] Checking for updates...
	I1009 19:27:40.848757  194626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:27:40.850175  194626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:27:40.851491  194626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:27:40.852705  194626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:27:40.853892  194626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:27:40.855321  194626 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:27:40.878925  194626 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:27:40.879089  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:40.938194  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.927865936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:40.938299  194626 docker.go:319] overlay module found
	I1009 19:27:40.940236  194626 out.go:179] * Using the docker driver based on user configuration
	I1009 19:27:40.941822  194626 start.go:309] selected driver: docker
	I1009 19:27:40.941843  194626 start.go:930] validating driver "docker" against <nil>
	I1009 19:27:40.941856  194626 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:27:40.942433  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:41.005454  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.995221815 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:41.005685  194626 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 19:27:41.005966  194626 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:27:41.007942  194626 out.go:179] * Using Docker driver with root privileges
	I1009 19:27:41.009445  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:41.009493  194626 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 19:27:41.009501  194626 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:27:41.009585  194626 start.go:353] cluster config:
	{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1009 19:27:41.010949  194626 out.go:179] * Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	I1009 19:27:41.012190  194626 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:27:41.013322  194626 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:27:41.014388  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.014435  194626 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:27:41.014445  194626 cache.go:58] Caching tarball of preloaded images
	I1009 19:27:41.014436  194626 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:27:41.014539  194626 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:27:41.014554  194626 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:27:41.014900  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:41.014929  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json: {Name:mk248a27bef73ecdf1bd71857a83cb8ef52e81ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:41.034874  194626 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:27:41.034900  194626 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:27:41.034920  194626 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:27:41.034961  194626 start.go:361] acquireMachinesLock for ha-898615: {Name:mk23a3ebf19c307a491c00f9452d757837bd240e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:27:41.035085  194626 start.go:365] duration metric: took 102.16µs to acquireMachinesLock for "ha-898615"
	I1009 19:27:41.035119  194626 start.go:94] Provisioning new machine with config: &{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:27:41.035201  194626 start.go:126] createHost starting for "" (driver="docker")
	I1009 19:27:41.037427  194626 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 19:27:41.037659  194626 start.go:160] libmachine.API.Create for "ha-898615" (driver="docker")
	I1009 19:27:41.037695  194626 client.go:168] LocalClient.Create starting
	I1009 19:27:41.037764  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem
	I1009 19:27:41.037804  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037821  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.037887  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem
	I1009 19:27:41.037913  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037927  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.038348  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:27:41.055472  194626 cli_runner.go:211] docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:27:41.055548  194626 network_create.go:284] running [docker network inspect ha-898615] to gather additional debugging logs...
	I1009 19:27:41.055569  194626 cli_runner.go:164] Run: docker network inspect ha-898615
	W1009 19:27:41.074024  194626 cli_runner.go:211] docker network inspect ha-898615 returned with exit code 1
	I1009 19:27:41.074056  194626 network_create.go:287] error running [docker network inspect ha-898615]: docker network inspect ha-898615: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-898615 not found
	I1009 19:27:41.074071  194626 network_create.go:289] output of [docker network inspect ha-898615]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-898615 not found
	
	** /stderr **
	I1009 19:27:41.074158  194626 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:41.091849  194626 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a266e0}
	I1009 19:27:41.091899  194626 network_create.go:124] attempt to create docker network ha-898615 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 19:27:41.091955  194626 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-898615 ha-898615
	I1009 19:27:41.153434  194626 network_create.go:108] docker network ha-898615 192.168.49.0/24 created
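Here minikube has picked the first free private /24 (192.168.49.0/24) and created a dedicated bridge network for the profile. For reference, the resulting subnet, gateway and MTU can be read back with the same Go-template fields the log itself uses; a quick spot check, assuming a local Docker daemon and this profile name:

    # spot-check the per-profile bridge network created above
    docker network inspect ha-898615 \
      --format '{{(index .IPAM.Config 0).Subnet}} gw={{(index .IPAM.Config 0).Gateway}} mtu={{index .Options "com.docker.network.driver.mtu"}}'
    # expected for this run: 192.168.49.0/24 gw=192.168.49.1 mtu=1500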
	I1009 19:27:41.153469  194626 kic.go:121] calculated static IP "192.168.49.2" for the "ha-898615" container
	I1009 19:27:41.153533  194626 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:27:41.170560  194626 cli_runner.go:164] Run: docker volume create ha-898615 --label name.minikube.sigs.k8s.io=ha-898615 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:27:41.190645  194626 oci.go:103] Successfully created a docker volume ha-898615
	I1009 19:27:41.190737  194626 cli_runner.go:164] Run: docker run --rm --name ha-898615-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --entrypoint /usr/bin/test -v ha-898615:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:27:41.600938  194626 oci.go:107] Successfully prepared a docker volume ha-898615
	I1009 19:27:41.600967  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.600993  194626 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:27:41.601055  194626 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 19:27:46.106521  194626 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.505414435s)
	I1009 19:27:46.106562  194626 kic.go:203] duration metric: took 4.50556336s to extract preloaded images to volume ...
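The preloaded image tarball is unpacked straight into the profile's Docker volume so that CRI-O inside the node container starts with a warm image store under /var/lib/containers. A rough way to peek at what the extraction left behind (busybox is an arbitrary helper image chosen here for illustration, not part of minikube's flow):

    # list the containers/storage tree that the preload populated in the ha-898615 volume
    docker run --rm -v ha-898615:/var busybox ls /var/lib/containers/storage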
	W1009 19:27:46.106658  194626 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 19:27:46.106697  194626 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 19:27:46.106744  194626 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:27:46.168307  194626 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-898615 --name ha-898615 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-898615 --network ha-898615 --ip 192.168.49.2 --volume ha-898615:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:27:46.438758  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Running}}
	I1009 19:27:46.461401  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.481444  194626 cli_runner.go:164] Run: docker exec ha-898615 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:27:46.527865  194626 oci.go:144] the created container "ha-898615" has a running status.
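The docker run above publishes the node's SSH (22), API server (8443) and a few auxiliary ports to ephemeral loopback ports; the later NetworkSettings.Ports inspections resolve them. The same mapping can be read back by hand:

    # show which 127.0.0.1 ports were assigned to the node's SSH and API-server ports
    docker port ha-898615 22
    docker port ha-898615 8443
    # in this run 22/tcp resolved to 127.0.0.1:32783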
	I1009 19:27:46.527897  194626 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa...
	I1009 19:27:46.644201  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 19:27:46.644249  194626 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:27:46.674019  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.696195  194626 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:27:46.696225  194626 kic_runner.go:114] Args: [docker exec --privileged ha-898615 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:27:46.751886  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
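With the generated public key installed as /home/docker/.ssh/authorized_keys, the node container is reachable over plain SSH through the published loopback port, which is what provisionDockerMachine relies on next. A manual equivalent, using the key path and port observed in this run, would look something like:

    # SSH into the kic node the same way minikube's provisioner does
    ssh -i /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa \
        -o StrictHostKeyChecking=no -p 32783 docker@127.0.0.1 hostname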
	I1009 19:27:46.774821  194626 machine.go:93] provisionDockerMachine start ...
	I1009 19:27:46.774929  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.795147  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.795497  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.795517  194626 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:27:46.948463  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:46.948514  194626 ubuntu.go:182] provisioning hostname "ha-898615"
	I1009 19:27:46.948583  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.967551  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.967801  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.967821  194626 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-898615 && echo "ha-898615" | sudo tee /etc/hostname
	I1009 19:27:47.127119  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:47.127192  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.145629  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.145957  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.145990  194626 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-898615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-898615/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-898615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:27:47.294310  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:27:47.294342  194626 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:27:47.294368  194626 ubuntu.go:190] setting up certificates
	I1009 19:27:47.294401  194626 provision.go:84] configureAuth start
	I1009 19:27:47.294454  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:47.312608  194626 provision.go:143] copyHostCerts
	I1009 19:27:47.312651  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312684  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:27:47.312731  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312817  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:27:47.312923  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.312952  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:27:47.312962  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.313014  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:27:47.313086  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313114  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:27:47.313124  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313163  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:27:47.313236  194626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.ha-898615 san=[127.0.0.1 192.168.49.2 ha-898615 localhost minikube]
	I1009 19:27:47.539620  194626 provision.go:177] copyRemoteCerts
	I1009 19:27:47.539697  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:27:47.539740  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.557819  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:47.662665  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:27:47.662781  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:27:47.683372  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:27:47.683457  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:27:47.701669  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:27:47.701736  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:27:47.720164  194626 provision.go:87] duration metric: took 425.744193ms to configureAuth
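configureAuth regenerates a machine server certificate whose SANs cover 127.0.0.1, the node IP, the machine name and localhost (the san=[...] list above), then copies it to /etc/docker on the node. The SANs in the written server.pem can be confirmed with openssl, for example:

    # print the Subject Alternative Names of the freshly generated machine server cert
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'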
	I1009 19:27:47.720200  194626 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:27:47.720451  194626 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:47.720564  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.738437  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.738688  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.738714  194626 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:27:48.000107  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:27:48.000134  194626 machine.go:96] duration metric: took 1.225290325s to provisionDockerMachine
	I1009 19:27:48.000148  194626 client.go:171] duration metric: took 6.962444548s to LocalClient.Create
	I1009 19:27:48.000174  194626 start.go:168] duration metric: took 6.962517267s to libmachine.API.Create "ha-898615"
	I1009 19:27:48.000183  194626 start.go:294] postStartSetup for "ha-898615" (driver="docker")
	I1009 19:27:48.000199  194626 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:27:48.000266  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:27:48.000306  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.017815  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.123997  194626 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:27:48.127754  194626 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:27:48.127793  194626 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:27:48.127806  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:27:48.127864  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:27:48.127968  194626 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:27:48.127983  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:27:48.128079  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:27:48.136280  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:48.157989  194626 start.go:297] duration metric: took 157.790225ms for postStartSetup
	I1009 19:27:48.158323  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.176302  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:48.176728  194626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:27:48.176785  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.195218  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.296952  194626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:27:48.301724  194626 start.go:129] duration metric: took 7.26650252s to createHost
	I1009 19:27:48.301754  194626 start.go:84] releasing machines lock for "ha-898615", held for 7.266653034s
	I1009 19:27:48.301837  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.319813  194626 ssh_runner.go:195] Run: cat /version.json
	I1009 19:27:48.319860  194626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:27:48.319866  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.319919  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.339000  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.339730  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.494727  194626 ssh_runner.go:195] Run: systemctl --version
	I1009 19:27:48.501535  194626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:27:48.537367  194626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:27:48.542466  194626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:27:48.542536  194626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:27:48.568823  194626 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
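Before settling on a CNI, minikube sidelines any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix, so only the kindnet config it installs later is active. The result can be checked from the host:

    # confirm the default bridge CNI configs were renamed out of the way
    docker exec ha-898615 ls /etc/cni/net.d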
	I1009 19:27:48.568848  194626 start.go:496] detecting cgroup driver to use...
	I1009 19:27:48.568888  194626 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:27:48.568936  194626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:27:48.585765  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:27:48.598310  194626 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:27:48.598367  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:27:48.615925  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:27:48.634773  194626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:27:48.716369  194626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:27:48.805777  194626 docker.go:234] disabling docker service ...
	I1009 19:27:48.805849  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:27:48.826709  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:27:48.840263  194626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:27:48.923854  194626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:27:49.007492  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:27:49.020704  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:27:49.035117  194626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:27:49.035178  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.045691  194626 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:27:49.045759  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.055054  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.064210  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.073237  194626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:27:49.081510  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.090399  194626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.104388  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.113513  194626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:27:49.121175  194626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:27:49.128977  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.201984  194626 ssh_runner.go:195] Run: sudo systemctl restart crio
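The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image to registry.k8s.io/pause:3.10.1, switch cgroup_manager to systemd, put conmon into the pod cgroup and allow unprivileged low ports via default_sysctls, while /etc/crictl.yaml points crictl at the CRI-O socket. After the restart, the drop-in can be verified with:

    # verify the CRI-O drop-in and crictl config written above
    docker exec ha-898615 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    docker exec ha-898615 cat /etc/crictl.yaml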
	I1009 19:27:49.310640  194626 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:27:49.310716  194626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:27:49.315028  194626 start.go:564] Will wait 60s for crictl version
	I1009 19:27:49.315098  194626 ssh_runner.go:195] Run: which crictl
	I1009 19:27:49.319134  194626 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:27:49.344131  194626 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:27:49.344210  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.374167  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.406880  194626 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:27:49.408191  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:49.425730  194626 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:27:49.430174  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:49.441269  194626 kubeadm.go:883] updating cluster {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:27:49.441374  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:49.441444  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.474537  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.474563  194626 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:27:49.474626  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.501408  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.501432  194626 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:27:49.501440  194626 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:27:49.501565  194626 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-898615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:27:49.501644  194626 ssh_runner.go:195] Run: crio config
	I1009 19:27:49.548225  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:49.548247  194626 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:27:49.548270  194626 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:27:49.548295  194626 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-898615 NodeName:ha-898615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:27:49.548487  194626 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-898615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
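This generated kubeadm configuration (Init, Cluster, Kubelet and KubeProxy sections in a single document) is written to /var/tmp/minikube/kubeadm.yaml.new below and copied into place before kubeadm init runs. Outside of minikube, the same file could be sanity-checked directly with kubeadm, for instance:

    # dry-run the generated configuration against the matching kubeadm version
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run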
	
	I1009 19:27:49.548516  194626 kube-vip.go:115] generating kube-vip config ...
	I1009 19:27:49.548561  194626 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:27:49.560569  194626 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:27:49.560666  194626 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
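Because "lsmod | grep ip_vs" found no IPVS modules in the host kernel, minikube falls back to the plain ARP-based kube-vip manifest above, which only floats the HA virtual IP 192.168.49.254 between control planes. On a host where IPVS-backed control-plane load balancing is wanted, the modules would need to be loaded first, e.g.:

    # load the IPVS kernel modules so the ip_vs check above would succeed
    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
    lsmod | grep ip_vs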
	I1009 19:27:49.560713  194626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:27:49.568926  194626 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:27:49.569017  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:27:49.577516  194626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:27:49.590951  194626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:27:49.606324  194626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 19:27:49.618986  194626 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 19:27:49.633526  194626 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:27:49.637412  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:49.647415  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.727039  194626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:49.749827  194626 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615 for IP: 192.168.49.2
	I1009 19:27:49.749848  194626 certs.go:195] generating shared ca certs ...
	I1009 19:27:49.749916  194626 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.750063  194626 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:27:49.750105  194626 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:27:49.750114  194626 certs.go:257] generating profile certs ...
	I1009 19:27:49.750166  194626 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key
	I1009 19:27:49.750191  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt with IP's: []
	I1009 19:27:49.922286  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt ...
	I1009 19:27:49.922322  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt: {Name:mke5f0b5d846145d5885091c3fdef11a03b4705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923243  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key ...
	I1009 19:27:49.923266  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key: {Name:mkb5fe559278888df39f6f81eb44f11c5b40eebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923364  194626 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38
	I1009 19:27:49.923404  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 19:27:50.174751  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 ...
	I1009 19:27:50.174785  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38: {Name:mk7158926fc69073b0db0c318bd9373ec8743788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.174962  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 ...
	I1009 19:27:50.174976  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38: {Name:mk3dc2ee28c7f607abddd15bf98fe46423492612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.175056  194626 certs.go:382] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt
	I1009 19:27:50.175135  194626 certs.go:386] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key
	I1009 19:27:50.175189  194626 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key
	I1009 19:27:50.175204  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt with IP's: []
	I1009 19:27:50.225715  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt ...
	I1009 19:27:50.225750  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt: {Name:mk61103353c5c3a0cbf54b18a910b0c83f19ee7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.225929  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key ...
	I1009 19:27:50.225941  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key: {Name:mkbe3072f67f16772d02b0025253c832dee4c437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.226013  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:27:50.226031  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:27:50.226044  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:27:50.226054  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:27:50.226067  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:27:50.226077  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:27:50.226088  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:27:50.226099  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:27:50.226151  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:27:50.226183  194626 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:27:50.226195  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:27:50.226219  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:27:50.226240  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:27:50.226260  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:27:50.226298  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:50.226322  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.226336  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.226348  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.226853  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:27:50.245895  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:27:50.263570  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:27:50.281671  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:27:50.300223  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:27:50.317734  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:27:50.335066  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:27:50.352704  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:27:50.370437  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:27:50.389421  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:27:50.406983  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:27:50.425614  194626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:27:50.439040  194626 ssh_runner.go:195] Run: openssl version
	I1009 19:27:50.445304  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:27:50.454013  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457873  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457929  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.491581  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:27:50.500958  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:27:50.509597  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513479  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513541  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.548203  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:27:50.557261  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:27:50.565908  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570090  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570139  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.604463  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
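The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the OpenSSL c_rehash convention: each trusted CA under /etc/ssl/certs is linked as <subject-hash>.0, where the hash is what "openssl x509 -hash" prints. Reproducing it for minikubeCA.pem:

    # the subject-name hash determines the /etc/ssl/certs/<hash>.0 symlink name
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941 for this CA, matching the link created above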
	I1009 19:27:50.613756  194626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:27:50.617456  194626 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:27:50.617519  194626 kubeadm.go:400] StartCluster: {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Socke
tVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:50.617611  194626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:27:50.617699  194626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:27:50.651007  194626 cri.go:89] found id: ""
	I1009 19:27:50.651094  194626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:27:50.660625  194626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:27:50.669536  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:27:50.669585  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:27:50.678565  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:27:50.678583  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:27:50.678632  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:27:50.686673  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:27:50.686739  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:27:50.694481  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:27:50.702511  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:27:50.702571  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:27:50.711335  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.719367  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:27:50.719437  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.727079  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:27:50.735773  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:27:50.735828  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:27:50.743402  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:27:50.783476  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:27:50.783553  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:27:50.804988  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:27:50.805072  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:27:50.805120  194626 kubeadm.go:318] OS: Linux
	I1009 19:27:50.805170  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:27:50.805244  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:27:50.805294  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:27:50.805368  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:27:50.805464  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:27:50.805570  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:27:50.805644  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:27:50.805709  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:27:50.865026  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:27:50.865138  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:27:50.865228  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:27:50.872900  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:27:50.874725  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:27:50.874828  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:27:50.874946  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:27:51.069934  194626 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:27:51.167065  194626 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:27:51.280550  194626 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:27:51.481119  194626 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:27:51.788062  194626 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:27:51.788230  194626 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:51.872181  194626 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:27:51.872362  194626 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:52.084923  194626 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:27:52.283000  194626 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:27:52.633617  194626 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:27:52.633683  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:27:52.795142  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:27:52.946617  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:27:53.060032  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:27:53.125689  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:27:53.322626  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:27:53.323225  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:27:53.325508  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:27:53.328645  194626 out.go:252]   - Booting up control plane ...
	I1009 19:27:53.328749  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:27:53.328835  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:27:53.328914  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:27:53.343297  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:27:53.343474  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:27:53.350427  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:27:53.350584  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:27:53.350658  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:27:53.451355  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:27:53.451572  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:27:54.452237  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000953428s
	I1009 19:27:54.456489  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:27:54.456634  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:27:54.456766  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:27:54.456840  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:31:54.457899  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	I1009 19:31:54.458060  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	I1009 19:31:54.458160  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	I1009 19:31:54.458176  194626 kubeadm.go:318] 
	I1009 19:31:54.458302  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:31:54.458440  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:31:54.458603  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:31:54.458813  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:31:54.458922  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:31:54.459044  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:31:54.459063  194626 kubeadm.go:318] 
	I1009 19:31:54.462630  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:54.462803  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:31:54.463587  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1009 19:31:54.463708  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1009 19:31:54.463871  194626 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000953428s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 19:31:54.463985  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 19:31:57.247677  194626 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.783657743s)
	I1009 19:31:57.247781  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:31:57.261935  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:31:57.261998  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:31:57.270455  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:31:57.270478  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:31:57.270524  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:31:57.278767  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:31:57.278835  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:31:57.286752  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:31:57.295053  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:31:57.295113  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:31:57.303048  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.311557  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:31:57.311631  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.319853  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:31:57.328199  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:31:57.328265  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:31:57.336006  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:31:57.396540  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:57.457937  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:36:00.178531  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 19:36:00.178718  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 19:36:00.182333  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:36:00.182408  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:36:00.182502  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:36:00.182552  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:36:00.182582  194626 kubeadm.go:318] OS: Linux
	I1009 19:36:00.182645  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:36:00.182724  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:36:00.182769  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:36:00.182812  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:36:00.182853  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:36:00.182902  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:36:00.182967  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:36:00.183022  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:36:00.183086  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:36:00.183241  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:36:00.183374  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:36:00.183474  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:36:00.185572  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:36:00.185640  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:36:00.185695  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:36:00.185782  194626 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:36:00.185857  194626 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:36:00.185922  194626 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:36:00.185977  194626 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:36:00.186037  194626 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:36:00.186100  194626 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:36:00.186172  194626 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:36:00.186259  194626 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:36:00.186320  194626 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:36:00.186413  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:36:00.186457  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:36:00.186503  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:36:00.186567  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:36:00.186661  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:36:00.186746  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:36:00.186865  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:36:00.186965  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:36:00.188328  194626 out.go:252]   - Booting up control plane ...
	I1009 19:36:00.188439  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:36:00.188511  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:36:00.188565  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:36:00.188647  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:36:00.188734  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:36:00.188835  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:36:00.188946  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:36:00.188994  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:36:00.189107  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:36:00.189207  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:36:00.189257  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.196157ms
	I1009 19:36:00.189351  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:36:00.189534  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:36:00.189618  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:36:00.189686  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:36:00.189753  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	I1009 19:36:00.189820  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	I1009 19:36:00.189897  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	I1009 19:36:00.189910  194626 kubeadm.go:318] 
	I1009 19:36:00.190035  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:36:00.190115  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:36:00.190192  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:36:00.190268  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:36:00.190333  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:36:00.190440  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:36:00.190476  194626 kubeadm.go:318] 
	I1009 19:36:00.190536  194626 kubeadm.go:402] duration metric: took 8m9.573023353s to StartCluster
	I1009 19:36:00.190620  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:36:00.190688  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:36:00.221163  194626 cri.go:89] found id: ""
	I1009 19:36:00.221200  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.221211  194626 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:36:00.221218  194626 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:36:00.221282  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:36:00.248439  194626 cri.go:89] found id: ""
	I1009 19:36:00.248473  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.248486  194626 logs.go:284] No container was found matching "etcd"
	I1009 19:36:00.248498  194626 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:36:00.248564  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:36:00.275751  194626 cri.go:89] found id: ""
	I1009 19:36:00.275781  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.275792  194626 logs.go:284] No container was found matching "coredns"
	I1009 19:36:00.275801  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:36:00.275868  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:36:00.304183  194626 cri.go:89] found id: ""
	I1009 19:36:00.304218  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.304227  194626 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:36:00.304233  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:36:00.304286  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:36:00.333120  194626 cri.go:89] found id: ""
	I1009 19:36:00.333154  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.333164  194626 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:36:00.333171  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:36:00.333221  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:36:00.362504  194626 cri.go:89] found id: ""
	I1009 19:36:00.362527  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.362536  194626 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:36:00.362542  194626 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:36:00.362602  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:36:00.391912  194626 cri.go:89] found id: ""
	I1009 19:36:00.391939  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.391949  194626 logs.go:284] No container was found matching "kindnet"
	I1009 19:36:00.391964  194626 logs.go:123] Gathering logs for kubelet ...
	I1009 19:36:00.391982  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:36:00.460680  194626 logs.go:123] Gathering logs for dmesg ...
	I1009 19:36:00.460722  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:36:00.473600  194626 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:36:00.473634  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:36:00.538515  194626 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:36:00.538542  194626 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:36:00.538556  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:36:00.601750  194626 logs.go:123] Gathering logs for container status ...
	I1009 19:36:00.601793  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 19:36:00.631755  194626 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 19:36:00.631818  194626 out.go:285] * 
	W1009 19:36:00.631894  194626 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.631914  194626 out.go:285] * 
	W1009 19:36:00.633671  194626 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:36:00.637455  194626 out.go:203] 
	W1009 19:36:00.638829  194626 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.638866  194626 out.go:285] * 
	I1009 19:36:00.640615  194626 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:38:24 ha-898615 crio[777]: time="2025-10-09T19:38:24.023518268Z" level=info msg="createCtr: removing container 470119fc7ec296bff35a38c0eddcd6a727d3b440514f907f04c96bb17acb7025" id=fb68ee05-83f3-4a3e-968c-90a82192b9d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:24 ha-898615 crio[777]: time="2025-10-09T19:38:24.023554685Z" level=info msg="createCtr: deleting container 470119fc7ec296bff35a38c0eddcd6a727d3b440514f907f04c96bb17acb7025 from storage" id=fb68ee05-83f3-4a3e-968c-90a82192b9d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:24 ha-898615 crio[777]: time="2025-10-09T19:38:24.025732674Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-898615_kube-system_aad558bc9f2efde8d3c90d373798de18_0" id=fb68ee05-83f3-4a3e-968c-90a82192b9d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:30 ha-898615 crio[777]: time="2025-10-09T19:38:30.000780755Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=97e97fb6-52d2-4029-923b-b3f69b00ce86 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:38:30 ha-898615 crio[777]: time="2025-10-09T19:38:30.001819051Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=cc81f79a-2e40-4e18-84c6-9b1db59635bd name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:38:30 ha-898615 crio[777]: time="2025-10-09T19:38:30.002828294Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-898615/kube-scheduler" id=aee1654c-8287-4806-81a3-d0eacb54c0ef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:30 ha-898615 crio[777]: time="2025-10-09T19:38:30.003084578Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:30 ha-898615 crio[777]: time="2025-10-09T19:38:30.006431148Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:30 ha-898615 crio[777]: time="2025-10-09T19:38:30.006857026Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:30 ha-898615 crio[777]: time="2025-10-09T19:38:30.022990099Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=aee1654c-8287-4806-81a3-d0eacb54c0ef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:30 ha-898615 crio[777]: time="2025-10-09T19:38:30.024515239Z" level=info msg="createCtr: deleting container ID 0f3a0570bfa904f35933a304cb8981379f4346efc0580b69dc2f1064bb06d79c from idIndex" id=aee1654c-8287-4806-81a3-d0eacb54c0ef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:30 ha-898615 crio[777]: time="2025-10-09T19:38:30.024556602Z" level=info msg="createCtr: removing container 0f3a0570bfa904f35933a304cb8981379f4346efc0580b69dc2f1064bb06d79c" id=aee1654c-8287-4806-81a3-d0eacb54c0ef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:30 ha-898615 crio[777]: time="2025-10-09T19:38:30.024595223Z" level=info msg="createCtr: deleting container 0f3a0570bfa904f35933a304cb8981379f4346efc0580b69dc2f1064bb06d79c from storage" id=aee1654c-8287-4806-81a3-d0eacb54c0ef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:30 ha-898615 crio[777]: time="2025-10-09T19:38:30.026840738Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-898615_kube-system_bc0e16a1814c5389485436acdfc968ed_0" id=aee1654c-8287-4806-81a3-d0eacb54c0ef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.000680189Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=fa92d1a6-78c9-4bab-9d04-5cfb3a072a8b name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.001726528Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=450a45a1-fb78-47f1-9905-175375c01971 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.002613061Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-898615/kube-controller-manager" id=ccd99405-9af9-4fb7-ba20-b8260017d020 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.002870006Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.006780157Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.007423464Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.02189118Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ccd99405-9af9-4fb7-ba20-b8260017d020 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.023315447Z" level=info msg="createCtr: deleting container ID 59492bb2b6fc8df05994b2ba11fa82dd5b67e32b91f202be585b4a62dfd0b19c from idIndex" id=ccd99405-9af9-4fb7-ba20-b8260017d020 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.023351053Z" level=info msg="createCtr: removing container 59492bb2b6fc8df05994b2ba11fa82dd5b67e32b91f202be585b4a62dfd0b19c" id=ccd99405-9af9-4fb7-ba20-b8260017d020 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.023395467Z" level=info msg="createCtr: deleting container 59492bb2b6fc8df05994b2ba11fa82dd5b67e32b91f202be585b4a62dfd0b19c from storage" id=ccd99405-9af9-4fb7-ba20-b8260017d020 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.025809621Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-898615_kube-system_606a9e8949f01295d66148e5eac379ce_0" id=ccd99405-9af9-4fb7-ba20-b8260017d020 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:38:32.602820    4591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:38:32.603328    4591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:38:32.604867    4591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:38:32.605410    4591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:38:32.607037    4591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:38:32 up  1:21,  0 user,  load average: 0.01, 0.04, 1.62
	Linux ha-898615 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:38:24 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:38:24 ha-898615 kubelet[1937]: E1009 19:38:24.026167    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-898615" podUID="aad558bc9f2efde8d3c90d373798de18"
	Oct 09 19:38:27 ha-898615 kubelet[1937]: E1009 19:38:27.633747    1937 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 09 19:38:28 ha-898615 kubelet[1937]: E1009 19:38:28.150658    1937 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 09 19:38:29 ha-898615 kubelet[1937]: E1009 19:38:29.723771    1937 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-898615.186ce986e63acb66  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-898615,UID:ha-898615,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-898615 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-898615,},FirstTimestamp:2025-10-09 19:31:59.992523622 +0000 UTC m=+0.320486905,LastTimestamp:2025-10-09 19:31:59.992523622 +0000 UTC m=+0.320486905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-898615,}"
	Oct 09 19:38:30 ha-898615 kubelet[1937]: E1009 19:38:30.000244    1937 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:38:30 ha-898615 kubelet[1937]: E1009 19:38:30.024989    1937 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-898615\" not found"
	Oct 09 19:38:30 ha-898615 kubelet[1937]: E1009 19:38:30.027201    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:38:30 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:38:30 ha-898615 kubelet[1937]:  > podSandboxID="d9cf0054a77eb17087a85fb70ade0aa16f7510c69fabd94329449a3f5ee8df1b"
	Oct 09 19:38:30 ha-898615 kubelet[1937]: E1009 19:38:30.027344    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:38:30 ha-898615 kubelet[1937]:         container kube-scheduler start failed in pod kube-scheduler-ha-898615_kube-system(bc0e16a1814c5389485436acdfc968ed): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:38:30 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:38:30 ha-898615 kubelet[1937]: E1009 19:38:30.027399    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-898615" podUID="bc0e16a1814c5389485436acdfc968ed"
	Oct 09 19:38:30 ha-898615 kubelet[1937]: E1009 19:38:30.652418    1937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:38:30 ha-898615 kubelet[1937]: I1009 19:38:30.833529    1937 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:38:30 ha-898615 kubelet[1937]: E1009 19:38:30.833985    1937 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	Oct 09 19:38:32 ha-898615 kubelet[1937]: E1009 19:38:32.000194    1937 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:38:32 ha-898615 kubelet[1937]: E1009 19:38:32.026173    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:38:32 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:38:32 ha-898615 kubelet[1937]:  > podSandboxID="da9ccac0760165455089feb0a833fa92808edbd25f27148d0daf896a8d20b03b"
	Oct 09 19:38:32 ha-898615 kubelet[1937]: E1009 19:38:32.026296    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:38:32 ha-898615 kubelet[1937]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-898615_kube-system(606a9e8949f01295d66148e5eac379ce): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:38:32 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:38:32 ha-898615 kubelet[1937]: E1009 19:38:32.026347    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-898615" podUID="606a9e8949f01295d66148e5eac379ce"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615: exit status 6 (306.553531ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:38:32.999982  206985 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-898615" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (48.13s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-898615" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-898615\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-898615\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nf
sshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-898615\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonIm
ages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-898615" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-898615\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-898615\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-898615\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\
"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --
output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-898615
helpers_test.go:243: (dbg) docker inspect ha-898615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	        "Created": "2025-10-09T19:27:46.185451139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195210,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:27:46.220087387Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hostname",
	        "HostsPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hosts",
	        "LogPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e-json.log",
	        "Name": "/ha-898615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-898615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-898615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	                "LowerDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-898615",
	                "Source": "/var/lib/docker/volumes/ha-898615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-898615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-898615",
	                "name.minikube.sigs.k8s.io": "ha-898615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39eaa0a5ee22c7c1e0e0329f3f944afa2ddef6cd571ebd9f0aa050805b81a54f",
	            "SandboxKey": "/var/run/docker/netns/39eaa0a5ee22",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-898615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:31:dc:da:08:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a28b71fbb3464ef90fae817eea2f214ee0a3d1fe379af7ad6c6cc41b0261919e",
	                    "EndpointID": "7930ac5db0626d63671f2a77753202141442de70603e1b4acf6213e0bd34944d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-898615",
	                        "24eb0e5b6c52"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615: exit status 6 (306.591175ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:38:33.652759  207240 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-158523 image ls --format table --alsologtostderr                                                     │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ image   │ functional-158523 image ls                                                                                      │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ delete  │ -p functional-158523                                                                                            │ functional-158523 │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │ 09 Oct 25 19:27 UTC │
	│ start   │ ha-898615 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- rollout status deployment/busybox                                                          │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node add --alsologtostderr -v 5                                                                       │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node stop m02 --alsologtostderr -v 5                                                                  │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node start m02 --alsologtostderr -v 5                                                                 │ ha-898615         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:27:40
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:27:40.840216  194626 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:27:40.840464  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840473  194626 out.go:374] Setting ErrFile to fd 2...
	I1009 19:27:40.840478  194626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:40.840669  194626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:27:40.841171  194626 out.go:368] Setting JSON to false
	I1009 19:27:40.842061  194626 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4210,"bootTime":1760033851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:27:40.842167  194626 start.go:143] virtualization: kvm guest
	I1009 19:27:40.844410  194626 out.go:179] * [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:27:40.845759  194626 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:27:40.845801  194626 notify.go:221] Checking for updates...
	I1009 19:27:40.848757  194626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:27:40.850175  194626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:27:40.851491  194626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:27:40.852705  194626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:27:40.853892  194626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:27:40.855321  194626 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:27:40.878925  194626 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:27:40.879089  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:40.938194  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.927865936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:40.938299  194626 docker.go:319] overlay module found
	I1009 19:27:40.940236  194626 out.go:179] * Using the docker driver based on user configuration
	I1009 19:27:40.941822  194626 start.go:309] selected driver: docker
	I1009 19:27:40.941843  194626 start.go:930] validating driver "docker" against <nil>
	I1009 19:27:40.941856  194626 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:27:40.942433  194626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:41.005454  194626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:27:40.995221815 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:27:41.005685  194626 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 19:27:41.005966  194626 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:27:41.007942  194626 out.go:179] * Using Docker driver with root privileges
	I1009 19:27:41.009445  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:41.009493  194626 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 19:27:41.009501  194626 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:27:41.009585  194626 start.go:353] cluster config:
	{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1009 19:27:41.010949  194626 out.go:179] * Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	I1009 19:27:41.012190  194626 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:27:41.013322  194626 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:27:41.014388  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.014435  194626 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:27:41.014445  194626 cache.go:58] Caching tarball of preloaded images
	I1009 19:27:41.014436  194626 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:27:41.014539  194626 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:27:41.014554  194626 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:27:41.014900  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:41.014929  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json: {Name:mk248a27bef73ecdf1bd71857a83cb8ef52e81ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:41.034874  194626 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:27:41.034900  194626 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:27:41.034920  194626 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:27:41.034961  194626 start.go:361] acquireMachinesLock for ha-898615: {Name:mk23a3ebf19c307a491c00f9452d757837bd240e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:27:41.035085  194626 start.go:365] duration metric: took 102.16µs to acquireMachinesLock for "ha-898615"
	I1009 19:27:41.035119  194626 start.go:94] Provisioning new machine with config: &{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:27:41.035201  194626 start.go:126] createHost starting for "" (driver="docker")
	I1009 19:27:41.037427  194626 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 19:27:41.037659  194626 start.go:160] libmachine.API.Create for "ha-898615" (driver="docker")
	I1009 19:27:41.037695  194626 client.go:168] LocalClient.Create starting
	I1009 19:27:41.037764  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem
	I1009 19:27:41.037804  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037821  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.037887  194626 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem
	I1009 19:27:41.037913  194626 main.go:141] libmachine: Decoding PEM data...
	I1009 19:27:41.037927  194626 main.go:141] libmachine: Parsing certificate...
	I1009 19:27:41.038348  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:27:41.055472  194626 cli_runner.go:211] docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:27:41.055548  194626 network_create.go:284] running [docker network inspect ha-898615] to gather additional debugging logs...
	I1009 19:27:41.055569  194626 cli_runner.go:164] Run: docker network inspect ha-898615
	W1009 19:27:41.074024  194626 cli_runner.go:211] docker network inspect ha-898615 returned with exit code 1
	I1009 19:27:41.074056  194626 network_create.go:287] error running [docker network inspect ha-898615]: docker network inspect ha-898615: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-898615 not found
	I1009 19:27:41.074071  194626 network_create.go:289] output of [docker network inspect ha-898615]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-898615 not found
	
	** /stderr **
	I1009 19:27:41.074158  194626 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:41.091849  194626 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a266e0}
	I1009 19:27:41.091899  194626 network_create.go:124] attempt to create docker network ha-898615 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 19:27:41.091955  194626 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-898615 ha-898615
	I1009 19:27:41.153434  194626 network_create.go:108] docker network ha-898615 192.168.49.0/24 created
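
The network-create step above boils down to a single "docker network create" invocation with the computed subnet, gateway, and MTU. A minimal Go sketch of issuing that same call is shown below; the helper name is hypothetical and this is illustrative only, not minikube's actual network_create.go.

package main

import (
	"fmt"
	"os/exec"
)

// createDockerNetwork shells out to "docker network create" with the same
// flags the log line above shows (bridge driver, fixed subnet/gateway,
// explicit MTU, minikube labels).
func createDockerNetwork(name, subnet, gateway string, mtu int) error {
	args := []string{
		"network", "create", "--driver=bridge",
		"--subnet=" + subnet,
		"--gateway=" + gateway,
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", fmt.Sprintf("com.docker.network.driver.mtu=%d", mtu),
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=" + name,
		name,
	}
	if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("docker network create: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := createDockerNetwork("ha-898615", "192.168.49.0/24", "192.168.49.1", 1500); err != nil {
		fmt.Println(err)
	}
}
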
	I1009 19:27:41.153469  194626 kic.go:121] calculated static IP "192.168.49.2" for the "ha-898615" container
	I1009 19:27:41.153533  194626 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:27:41.170560  194626 cli_runner.go:164] Run: docker volume create ha-898615 --label name.minikube.sigs.k8s.io=ha-898615 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:27:41.190645  194626 oci.go:103] Successfully created a docker volume ha-898615
	I1009 19:27:41.190737  194626 cli_runner.go:164] Run: docker run --rm --name ha-898615-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --entrypoint /usr/bin/test -v ha-898615:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:27:41.600938  194626 oci.go:107] Successfully prepared a docker volume ha-898615
	I1009 19:27:41.600967  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:41.600993  194626 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:27:41.601055  194626 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 19:27:46.106521  194626 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-898615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.505414435s)
	I1009 19:27:46.106562  194626 kic.go:203] duration metric: took 4.50556336s to extract preloaded images to volume ...
	W1009 19:27:46.106658  194626 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 19:27:46.106697  194626 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 19:27:46.106744  194626 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:27:46.168307  194626 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-898615 --name ha-898615 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-898615 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-898615 --network ha-898615 --ip 192.168.49.2 --volume ha-898615:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:27:46.438758  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Running}}
	I1009 19:27:46.461401  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.481444  194626 cli_runner.go:164] Run: docker exec ha-898615 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:27:46.527865  194626 oci.go:144] the created container "ha-898615" has a running status.
	I1009 19:27:46.527897  194626 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa...
	I1009 19:27:46.644201  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 19:27:46.644249  194626 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:27:46.674019  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.696195  194626 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:27:46.696225  194626 kic_runner.go:114] Args: [docker exec --privileged ha-898615 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:27:46.751886  194626 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:27:46.774821  194626 machine.go:93] provisionDockerMachine start ...
	I1009 19:27:46.774929  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.795147  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.795497  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.795517  194626 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:27:46.948463  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:46.948514  194626 ubuntu.go:182] provisioning hostname "ha-898615"
	I1009 19:27:46.948583  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:46.967551  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.967801  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:46.967821  194626 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-898615 && echo "ha-898615" | sudo tee /etc/hostname
	I1009 19:27:47.127119  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:27:47.127192  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.145629  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.145957  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.145990  194626 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-898615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-898615/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-898615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:27:47.294310  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:27:47.294342  194626 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:27:47.294368  194626 ubuntu.go:190] setting up certificates
	I1009 19:27:47.294401  194626 provision.go:84] configureAuth start
	I1009 19:27:47.294454  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:47.312608  194626 provision.go:143] copyHostCerts
	I1009 19:27:47.312651  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312684  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:27:47.312731  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:27:47.312817  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:27:47.312923  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.312952  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:27:47.312962  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:27:47.313014  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:27:47.313086  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313114  194626 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:27:47.313124  194626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:27:47.313163  194626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:27:47.313236  194626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.ha-898615 san=[127.0.0.1 192.168.49.2 ha-898615 localhost minikube]
	I1009 19:27:47.539620  194626 provision.go:177] copyRemoteCerts
	I1009 19:27:47.539697  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:27:47.539740  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.557819  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:47.662665  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:27:47.662781  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:27:47.683372  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:27:47.683457  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:27:47.701669  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:27:47.701736  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:27:47.720164  194626 provision.go:87] duration metric: took 425.744193ms to configureAuth
	I1009 19:27:47.720200  194626 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:27:47.720451  194626 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:47.720564  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:47.738437  194626 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:47.738688  194626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 19:27:47.738714  194626 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:27:48.000107  194626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:27:48.000134  194626 machine.go:96] duration metric: took 1.225290325s to provisionDockerMachine
	I1009 19:27:48.000148  194626 client.go:171] duration metric: took 6.962444548s to LocalClient.Create
	I1009 19:27:48.000174  194626 start.go:168] duration metric: took 6.962517267s to libmachine.API.Create "ha-898615"
	I1009 19:27:48.000183  194626 start.go:294] postStartSetup for "ha-898615" (driver="docker")
	I1009 19:27:48.000199  194626 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:27:48.000266  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:27:48.000306  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.017815  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.123997  194626 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:27:48.127754  194626 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:27:48.127793  194626 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:27:48.127806  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:27:48.127864  194626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:27:48.127968  194626 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:27:48.127983  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:27:48.128079  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:27:48.136280  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:48.157989  194626 start.go:297] duration metric: took 157.790225ms for postStartSetup
	I1009 19:27:48.158323  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.176302  194626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:27:48.176728  194626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:27:48.176785  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.195218  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.296952  194626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:27:48.301724  194626 start.go:129] duration metric: took 7.26650252s to createHost
	I1009 19:27:48.301754  194626 start.go:84] releasing machines lock for "ha-898615", held for 7.266653034s
	I1009 19:27:48.301837  194626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:27:48.319813  194626 ssh_runner.go:195] Run: cat /version.json
	I1009 19:27:48.319860  194626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:27:48.319866  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.319919  194626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:27:48.339000  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.339730  194626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:27:48.494727  194626 ssh_runner.go:195] Run: systemctl --version
	I1009 19:27:48.501535  194626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:27:48.537367  194626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:27:48.542466  194626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:27:48.542536  194626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:27:48.568823  194626 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:27:48.568848  194626 start.go:496] detecting cgroup driver to use...
	I1009 19:27:48.568888  194626 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:27:48.568936  194626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:27:48.585765  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:27:48.598310  194626 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:27:48.598367  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:27:48.615925  194626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:27:48.634773  194626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:27:48.716369  194626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:27:48.805777  194626 docker.go:234] disabling docker service ...
	I1009 19:27:48.805849  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:27:48.826709  194626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:27:48.840263  194626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:27:48.923854  194626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:27:49.007492  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:27:49.020704  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:27:49.035117  194626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:27:49.035178  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.045691  194626 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:27:49.045759  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.055054  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.064210  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.073237  194626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:27:49.081510  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.090399  194626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.104388  194626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.113513  194626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:27:49.121175  194626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:27:49.128977  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.201984  194626 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:27:49.310640  194626 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:27:49.310716  194626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:27:49.315028  194626 start.go:564] Will wait 60s for crictl version
	I1009 19:27:49.315098  194626 ssh_runner.go:195] Run: which crictl
	I1009 19:27:49.319134  194626 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:27:49.344131  194626 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:27:49.344210  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.374167  194626 ssh_runner.go:195] Run: crio --version
	I1009 19:27:49.406880  194626 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:27:49.408191  194626 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:49.425730  194626 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:27:49.430174  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
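
The bash one-liner above rewrites /etc/hosts so that host.minikube.internal resolves to the network gateway 192.168.49.1: it drops any existing line for that host, appends the new mapping, and copies the result back over /etc/hosts. A hypothetical Go equivalent of that idempotent rewrite (helper and temp-file name are illustrative; needs root to replace the file):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line for host, appends "ip<TAB>host",
// and atomically replaces the hosts file, mirroring the grep/echo/cp pattern
// in the log line above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".minikube.tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	_ = ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal")
}
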
	I1009 19:27:49.441269  194626 kubeadm.go:883] updating cluster {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:27:49.441374  194626 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:49.441444  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.474537  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.474563  194626 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:27:49.474626  194626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:49.501408  194626 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:49.501432  194626 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:27:49.501440  194626 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:27:49.501565  194626 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-898615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:27:49.501644  194626 ssh_runner.go:195] Run: crio config
	I1009 19:27:49.548225  194626 cni.go:84] Creating CNI manager for ""
	I1009 19:27:49.548247  194626 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:27:49.548270  194626 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:27:49.548295  194626 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-898615 NodeName:ha-898615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:27:49.548487  194626 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-898615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:27:49.548516  194626 kube-vip.go:115] generating kube-vip config ...
	I1009 19:27:49.548561  194626 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:27:49.560569  194626 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
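
kube-vip only enables IPVS-based control-plane load balancing when the ip_vs kernel modules are loaded, and the failed "lsmod | grep ip_vs" above is that probe, so the generated config below omits load balancing and keeps only the ARP-advertised VIP. A minimal sketch of the same check done directly against /proc/modules (hypothetical helper, not minikube's code):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// moduleLoaded scans /proc/modules (the file lsmod reads) for a loaded
// kernel module whose name matches the first field of a line.
func moduleLoaded(name string) (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), name+" ") {
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	loaded, err := moduleLoaded("ip_vs")
	fmt.Println("ip_vs loaded:", loaded, "err:", err)
}
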
	I1009 19:27:49.560666  194626 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1009 19:27:49.560713  194626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:27:49.568926  194626 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:27:49.569017  194626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:27:49.577516  194626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:27:49.590951  194626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:27:49.606324  194626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 19:27:49.618986  194626 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 19:27:49.633526  194626 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:27:49.637412  194626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:49.647415  194626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:49.727039  194626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:49.749827  194626 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615 for IP: 192.168.49.2
	I1009 19:27:49.749848  194626 certs.go:195] generating shared ca certs ...
	I1009 19:27:49.749916  194626 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.750063  194626 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:27:49.750105  194626 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:27:49.750114  194626 certs.go:257] generating profile certs ...
	I1009 19:27:49.750166  194626 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key
	I1009 19:27:49.750191  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt with IP's: []
	I1009 19:27:49.922286  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt ...
	I1009 19:27:49.922322  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt: {Name:mke5f0b5d846145d5885091c3fdef11a03b4705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923243  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key ...
	I1009 19:27:49.923266  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key: {Name:mkb5fe559278888df39f6f81eb44f11c5b40eebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:49.923364  194626 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38
	I1009 19:27:49.923404  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 19:27:50.174751  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 ...
	I1009 19:27:50.174785  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38: {Name:mk7158926fc69073b0db0c318bd9373ec8743788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.174962  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 ...
	I1009 19:27:50.174976  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38: {Name:mk3dc2ee28c7f607abddd15bf98fe46423492612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.175056  194626 certs.go:382] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt
	I1009 19:27:50.175135  194626 certs.go:386] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.6d512e38 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key
	I1009 19:27:50.175189  194626 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key
	I1009 19:27:50.175204  194626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt with IP's: []
	I1009 19:27:50.225715  194626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt ...
	I1009 19:27:50.225750  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt: {Name:mk61103353c5c3a0cbf54b18a910b0c83f19ee7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.225929  194626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key ...
	I1009 19:27:50.225941  194626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key: {Name:mkbe3072f67f16772d02b0025253c832dee4c437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.226013  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:27:50.226031  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:27:50.226044  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:27:50.226054  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:27:50.226067  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:27:50.226077  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:27:50.226088  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:27:50.226099  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:27:50.226151  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:27:50.226183  194626 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:27:50.226195  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:27:50.226219  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:27:50.226240  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:27:50.226260  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:27:50.226298  194626 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:27:50.226322  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.226336  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.226348  194626 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.226853  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:27:50.245895  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:27:50.263570  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:27:50.281671  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:27:50.300223  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:27:50.317734  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:27:50.335066  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:27:50.352704  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:27:50.370437  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:27:50.389421  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:27:50.406983  194626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:27:50.425614  194626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:27:50.439040  194626 ssh_runner.go:195] Run: openssl version
	I1009 19:27:50.445304  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:27:50.454013  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457873  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.457929  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.491581  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:27:50.500958  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:27:50.509597  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513479  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.513541  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:27:50.548203  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:27:50.557261  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:27:50.565908  194626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570090  194626 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.570139  194626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:27:50.604463  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
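
The openssl/ln sequence above installs each CA (minikubeCA.pem, 141519.pem, 1415192.pem) into /etc/ssl/certs under its OpenSSL subject hash, which is why the symlinks carry names like b5213941.0: tools that scan the hashed certificate directory look certs up by that hash. A hypothetical Go sketch of that step (helper name and paths are illustrative; requires root and the openssl binary):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert computes the OpenSSL subject hash of a certificate and links it
// into /etc/ssl/certs as "<hash>.0", mirroring the ln -fs commands above.
func trustCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %v", certPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any stale link, as ln -fs would
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
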
	I1009 19:27:50.613756  194626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:27:50.617456  194626 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:27:50.617519  194626 kubeadm.go:400] StartCluster: {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:50.617611  194626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:27:50.617699  194626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:27:50.651007  194626 cri.go:89] found id: ""
	I1009 19:27:50.651094  194626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:27:50.660625  194626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:27:50.669536  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:27:50.669585  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:27:50.678565  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:27:50.678583  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:27:50.678632  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:27:50.686673  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:27:50.686739  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:27:50.694481  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:27:50.702511  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:27:50.702571  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:27:50.711335  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.719367  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:27:50.719437  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:27:50.727079  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:27:50.735773  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:27:50.735828  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:27:50.743402  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:27:50.783476  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:27:50.783553  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:27:50.804988  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:27:50.805072  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:27:50.805120  194626 kubeadm.go:318] OS: Linux
	I1009 19:27:50.805170  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:27:50.805244  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:27:50.805294  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:27:50.805368  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:27:50.805464  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:27:50.805570  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:27:50.805644  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:27:50.805709  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:27:50.865026  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:27:50.865138  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:27:50.865228  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:27:50.872900  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:27:50.874725  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:27:50.874828  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:27:50.874946  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:27:51.069934  194626 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:27:51.167065  194626 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:27:51.280550  194626 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:27:51.481119  194626 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:27:51.788062  194626 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:27:51.788230  194626 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:51.872181  194626 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:27:51.872362  194626 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:27:52.084923  194626 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:27:52.283000  194626 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:27:52.633617  194626 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:27:52.633683  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:27:52.795142  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:27:52.946617  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:27:53.060032  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:27:53.125689  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:27:53.322626  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:27:53.323225  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:27:53.325508  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:27:53.328645  194626 out.go:252]   - Booting up control plane ...
	I1009 19:27:53.328749  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:27:53.328835  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:27:53.328914  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:27:53.343297  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:27:53.343474  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:27:53.350427  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:27:53.350584  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:27:53.350658  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:27:53.451355  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:27:53.451572  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:27:54.452237  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000953428s
	I1009 19:27:54.456489  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:27:54.456634  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:27:54.456766  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:27:54.456840  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:31:54.457899  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	I1009 19:31:54.458060  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	I1009 19:31:54.458160  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	I1009 19:31:54.458176  194626 kubeadm.go:318] 
	I1009 19:31:54.458302  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:31:54.458440  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:31:54.458603  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:31:54.458813  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:31:54.458922  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:31:54.459044  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:31:54.459063  194626 kubeadm.go:318] 
	I1009 19:31:54.462630  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:54.462803  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:31:54.463587  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	I1009 19:31:54.463708  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1009 19:31:54.463871  194626 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-898615 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000953428s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001112864s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001115029s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00128327s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 19:31:54.463985  194626 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 19:31:57.247677  194626 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.783657743s)
	I1009 19:31:57.247781  194626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:31:57.261935  194626 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:31:57.261998  194626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:31:57.270455  194626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:31:57.270478  194626 kubeadm.go:157] found existing configuration files:
	
	I1009 19:31:57.270524  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:31:57.278767  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:31:57.278835  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:31:57.286752  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:31:57.295053  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:31:57.295113  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:31:57.303048  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.311557  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:31:57.311631  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:31:57.319853  194626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:31:57.328199  194626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:31:57.328265  194626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:31:57.336006  194626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:31:57.396540  194626 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:31:57.457937  194626 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:36:00.178531  194626 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 19:36:00.178718  194626 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 19:36:00.182333  194626 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:36:00.182408  194626 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:36:00.182502  194626 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:36:00.182552  194626 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:36:00.182582  194626 kubeadm.go:318] OS: Linux
	I1009 19:36:00.182645  194626 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:36:00.182724  194626 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:36:00.182769  194626 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:36:00.182812  194626 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:36:00.182853  194626 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:36:00.182902  194626 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:36:00.182967  194626 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:36:00.183022  194626 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:36:00.183086  194626 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:36:00.183241  194626 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:36:00.183374  194626 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:36:00.183474  194626 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:36:00.185572  194626 out.go:252]   - Generating certificates and keys ...
	I1009 19:36:00.185640  194626 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:36:00.185695  194626 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:36:00.185782  194626 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:36:00.185857  194626 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:36:00.185922  194626 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:36:00.185977  194626 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:36:00.186037  194626 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:36:00.186100  194626 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:36:00.186172  194626 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:36:00.186259  194626 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:36:00.186320  194626 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:36:00.186413  194626 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:36:00.186457  194626 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:36:00.186503  194626 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:36:00.186567  194626 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:36:00.186661  194626 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:36:00.186746  194626 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:36:00.186865  194626 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:36:00.186965  194626 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:36:00.188328  194626 out.go:252]   - Booting up control plane ...
	I1009 19:36:00.188439  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:36:00.188511  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:36:00.188565  194626 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:36:00.188647  194626 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:36:00.188734  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:36:00.188835  194626 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:36:00.188946  194626 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:36:00.188994  194626 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:36:00.189107  194626 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:36:00.189207  194626 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:36:00.189257  194626 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.196157ms
	I1009 19:36:00.189351  194626 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:36:00.189534  194626 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:36:00.189618  194626 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:36:00.189686  194626 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:36:00.189753  194626 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	I1009 19:36:00.189820  194626 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	I1009 19:36:00.189897  194626 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	I1009 19:36:00.189910  194626 kubeadm.go:318] 
	I1009 19:36:00.190035  194626 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:36:00.190115  194626 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:36:00.190192  194626 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:36:00.190268  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:36:00.190333  194626 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:36:00.190440  194626 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:36:00.190476  194626 kubeadm.go:318] 
	I1009 19:36:00.190536  194626 kubeadm.go:402] duration metric: took 8m9.573023353s to StartCluster
	I1009 19:36:00.190620  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:36:00.190688  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:36:00.221163  194626 cri.go:89] found id: ""
	I1009 19:36:00.221200  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.221211  194626 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:36:00.221218  194626 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:36:00.221282  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:36:00.248439  194626 cri.go:89] found id: ""
	I1009 19:36:00.248473  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.248486  194626 logs.go:284] No container was found matching "etcd"
	I1009 19:36:00.248498  194626 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:36:00.248564  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:36:00.275751  194626 cri.go:89] found id: ""
	I1009 19:36:00.275781  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.275792  194626 logs.go:284] No container was found matching "coredns"
	I1009 19:36:00.275801  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:36:00.275868  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:36:00.304183  194626 cri.go:89] found id: ""
	I1009 19:36:00.304218  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.304227  194626 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:36:00.304233  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:36:00.304286  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:36:00.333120  194626 cri.go:89] found id: ""
	I1009 19:36:00.333154  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.333164  194626 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:36:00.333171  194626 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:36:00.333221  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:36:00.362504  194626 cri.go:89] found id: ""
	I1009 19:36:00.362527  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.362536  194626 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:36:00.362542  194626 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:36:00.362602  194626 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:36:00.391912  194626 cri.go:89] found id: ""
	I1009 19:36:00.391939  194626 logs.go:282] 0 containers: []
	W1009 19:36:00.391949  194626 logs.go:284] No container was found matching "kindnet"
	I1009 19:36:00.391964  194626 logs.go:123] Gathering logs for kubelet ...
	I1009 19:36:00.391982  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:36:00.460680  194626 logs.go:123] Gathering logs for dmesg ...
	I1009 19:36:00.460722  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:36:00.473600  194626 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:36:00.473634  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:36:00.538515  194626 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:36:00.530274    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.530873    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532450    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.532972    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:36:00.534550    2544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:36:00.538542  194626 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:36:00.538556  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:36:00.601750  194626 logs.go:123] Gathering logs for container status ...
	I1009 19:36:00.601793  194626 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 19:36:00.631755  194626 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 19:36:00.631818  194626 out.go:285] * 
	W1009 19:36:00.631894  194626 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.631914  194626 out.go:285] * 
	W1009 19:36:00.633671  194626 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:36:00.637455  194626 out.go:203] 
	W1009 19:36:00.638829  194626 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.196157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000380593s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000658374s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000650321s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:36:00.638866  194626 out.go:285] * 
	I1009 19:36:00.640615  194626 out.go:203] 
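Note on the failure above: kubeadm's wait-control-plane phase never saw kube-apiserver, kube-controller-manager, or kube-scheduler become healthy, and the CRI listings in the same trace found no control-plane containers at all. The probes kubeadm used can be re-run by hand against the same endpoints recorded in the log; this is only a minimal diagnostic sketch, assuming the node is still reachable (for example via `minikube ssh -p ha-898615`):
	- 'curl -ksf https://127.0.0.1:10259/livez'     # kube-scheduler, as probed above
	- 'curl -ksf https://127.0.0.1:10257/healthz'   # kube-controller-manager
	- 'curl -ksf https://192.168.49.2:8443/livez'   # kube-apiserver
Connection refused on all three, as seen in this run, suggests the containers were never created rather than merely slow to start; the CRI-O section below shows why.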
	
	
	==> CRI-O <==
	Oct 09 19:38:30 ha-898615 crio[777]: time="2025-10-09T19:38:30.024556602Z" level=info msg="createCtr: removing container 0f3a0570bfa904f35933a304cb8981379f4346efc0580b69dc2f1064bb06d79c" id=aee1654c-8287-4806-81a3-d0eacb54c0ef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:30 ha-898615 crio[777]: time="2025-10-09T19:38:30.024595223Z" level=info msg="createCtr: deleting container 0f3a0570bfa904f35933a304cb8981379f4346efc0580b69dc2f1064bb06d79c from storage" id=aee1654c-8287-4806-81a3-d0eacb54c0ef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:30 ha-898615 crio[777]: time="2025-10-09T19:38:30.026840738Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-898615_kube-system_bc0e16a1814c5389485436acdfc968ed_0" id=aee1654c-8287-4806-81a3-d0eacb54c0ef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.000680189Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=fa92d1a6-78c9-4bab-9d04-5cfb3a072a8b name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.001726528Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=450a45a1-fb78-47f1-9905-175375c01971 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.002613061Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-898615/kube-controller-manager" id=ccd99405-9af9-4fb7-ba20-b8260017d020 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.002870006Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.006780157Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.007423464Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.02189118Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ccd99405-9af9-4fb7-ba20-b8260017d020 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.023315447Z" level=info msg="createCtr: deleting container ID 59492bb2b6fc8df05994b2ba11fa82dd5b67e32b91f202be585b4a62dfd0b19c from idIndex" id=ccd99405-9af9-4fb7-ba20-b8260017d020 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.023351053Z" level=info msg="createCtr: removing container 59492bb2b6fc8df05994b2ba11fa82dd5b67e32b91f202be585b4a62dfd0b19c" id=ccd99405-9af9-4fb7-ba20-b8260017d020 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.023395467Z" level=info msg="createCtr: deleting container 59492bb2b6fc8df05994b2ba11fa82dd5b67e32b91f202be585b4a62dfd0b19c from storage" id=ccd99405-9af9-4fb7-ba20-b8260017d020 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:32 ha-898615 crio[777]: time="2025-10-09T19:38:32.025809621Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-898615_kube-system_606a9e8949f01295d66148e5eac379ce_0" id=ccd99405-9af9-4fb7-ba20-b8260017d020 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:33 ha-898615 crio[777]: time="2025-10-09T19:38:33.001470567Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=841b0df8-5000-4b53-8e8b-b4b3ee56db59 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:38:33 ha-898615 crio[777]: time="2025-10-09T19:38:33.002433397Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=6793ea6e-10ca-4d66-a9a8-b1a0d0cc0f2a name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:38:33 ha-898615 crio[777]: time="2025-10-09T19:38:33.003423579Z" level=info msg="Creating container: kube-system/etcd-ha-898615/etcd" id=5e7a3379-da49-494a-a597-437257276ecd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:33 ha-898615 crio[777]: time="2025-10-09T19:38:33.003722685Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:33 ha-898615 crio[777]: time="2025-10-09T19:38:33.007256745Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:33 ha-898615 crio[777]: time="2025-10-09T19:38:33.007699681Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:33 ha-898615 crio[777]: time="2025-10-09T19:38:33.026337556Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=5e7a3379-da49-494a-a597-437257276ecd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:33 ha-898615 crio[777]: time="2025-10-09T19:38:33.027836569Z" level=info msg="createCtr: deleting container ID 4502455c5ac23bde0b8a556f1c8c2fa4d4bed82cf39bc2ce25e2b80e763eb30d from idIndex" id=5e7a3379-da49-494a-a597-437257276ecd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:33 ha-898615 crio[777]: time="2025-10-09T19:38:33.027889801Z" level=info msg="createCtr: removing container 4502455c5ac23bde0b8a556f1c8c2fa4d4bed82cf39bc2ce25e2b80e763eb30d" id=5e7a3379-da49-494a-a597-437257276ecd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:33 ha-898615 crio[777]: time="2025-10-09T19:38:33.027934749Z" level=info msg="createCtr: deleting container 4502455c5ac23bde0b8a556f1c8c2fa4d4bed82cf39bc2ce25e2b80e763eb30d from storage" id=5e7a3379-da49-494a-a597-437257276ecd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:33 ha-898615 crio[777]: time="2025-10-09T19:38:33.030562447Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-898615_kube-system_2d43b86ac03e93e11f35c923e2560103_0" id=5e7a3379-da49-494a-a597-437257276ecd name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:38:34.261700    4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:38:34.262080    4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:38:34.263620    4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:38:34.264128    4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:38:34.265482    4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:38:34 up  1:21,  0 user,  load average: 0.49, 0.13, 1.64
	Linux ha-898615 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:38:30 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:38:30 ha-898615 kubelet[1937]:  > podSandboxID="d9cf0054a77eb17087a85fb70ade0aa16f7510c69fabd94329449a3f5ee8df1b"
	Oct 09 19:38:30 ha-898615 kubelet[1937]: E1009 19:38:30.027344    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:38:30 ha-898615 kubelet[1937]:         container kube-scheduler start failed in pod kube-scheduler-ha-898615_kube-system(bc0e16a1814c5389485436acdfc968ed): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:38:30 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:38:30 ha-898615 kubelet[1937]: E1009 19:38:30.027399    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-898615" podUID="bc0e16a1814c5389485436acdfc968ed"
	Oct 09 19:38:30 ha-898615 kubelet[1937]: E1009 19:38:30.652418    1937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:38:30 ha-898615 kubelet[1937]: I1009 19:38:30.833529    1937 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:38:30 ha-898615 kubelet[1937]: E1009 19:38:30.833985    1937 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	Oct 09 19:38:32 ha-898615 kubelet[1937]: E1009 19:38:32.000194    1937 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:38:32 ha-898615 kubelet[1937]: E1009 19:38:32.026173    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:38:32 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:38:32 ha-898615 kubelet[1937]:  > podSandboxID="da9ccac0760165455089feb0a833fa92808edbd25f27148d0daf896a8d20b03b"
	Oct 09 19:38:32 ha-898615 kubelet[1937]: E1009 19:38:32.026296    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:38:32 ha-898615 kubelet[1937]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-898615_kube-system(606a9e8949f01295d66148e5eac379ce): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:38:32 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:38:32 ha-898615 kubelet[1937]: E1009 19:38:32.026347    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-898615" podUID="606a9e8949f01295d66148e5eac379ce"
	Oct 09 19:38:33 ha-898615 kubelet[1937]: E1009 19:38:33.000964    1937 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:38:33 ha-898615 kubelet[1937]: E1009 19:38:33.030921    1937 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:38:33 ha-898615 kubelet[1937]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:38:33 ha-898615 kubelet[1937]:  > podSandboxID="043d27dc8aae857d8a42667dfed1b409b78957e0ac42335f6d23fbc5540aedfd"
	Oct 09 19:38:33 ha-898615 kubelet[1937]: E1009 19:38:33.031055    1937 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:38:33 ha-898615 kubelet[1937]:         container etcd start failed in pod etcd-ha-898615_kube-system(2d43b86ac03e93e11f35c923e2560103): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:38:33 ha-898615 kubelet[1937]:  > logger="UnhandledError"
	Oct 09 19:38:33 ha-898615 kubelet[1937]: E1009 19:38:33.031102    1937 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-898615" podUID="2d43b86ac03e93e11f35c923e2560103"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615: exit status 6 (306.522643ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:38:34.647575  207583 status.go:458] kubeconfig endpoint: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-898615" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.65s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-898615 stop --alsologtostderr -v 5: (1.215370716s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 start --wait true --alsologtostderr -v 5
E1009 19:38:37.175588  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:43:37.175655  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 start --wait true --alsologtostderr -v 5: exit status 80 (6m7.499133018s)

                                                
                                                
-- stdout --
	* [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:38:35.975523  207930 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:38:35.975809  207930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:35.975820  207930 out.go:374] Setting ErrFile to fd 2...
	I1009 19:38:35.975824  207930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:35.976017  207930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:38:35.976529  207930 out.go:368] Setting JSON to false
	I1009 19:38:35.977520  207930 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4865,"bootTime":1760033851,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:38:35.977618  207930 start.go:143] virtualization: kvm guest
	I1009 19:38:35.979911  207930 out.go:179] * [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:38:35.981312  207930 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:38:35.981323  207930 notify.go:221] Checking for updates...
	I1009 19:38:35.983929  207930 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:38:35.985330  207930 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:38:35.986909  207930 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:38:35.988196  207930 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:38:35.989553  207930 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:38:35.991338  207930 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:35.991495  207930 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:38:36.015602  207930 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:38:36.015757  207930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:38:36.075307  207930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:38:36.06526946 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:38:36.075425  207930 docker.go:319] overlay module found
	I1009 19:38:36.077367  207930 out.go:179] * Using the docker driver based on existing profile
	I1009 19:38:36.078862  207930 start.go:309] selected driver: docker
	I1009 19:38:36.078876  207930 start.go:930] validating driver "docker" against &{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:38:36.078976  207930 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:38:36.079059  207930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:38:36.140960  207930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:38:36.131484248 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:38:36.141642  207930 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:38:36.141674  207930 cni.go:84] Creating CNI manager for ""
	I1009 19:38:36.141735  207930 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:38:36.141786  207930 start.go:353] cluster config:
	{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1009 19:38:36.143505  207930 out.go:179] * Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	I1009 19:38:36.144834  207930 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:38:36.146099  207930 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:38:36.147345  207930 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:38:36.147407  207930 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:38:36.147422  207930 cache.go:58] Caching tarball of preloaded images
	I1009 19:38:36.147438  207930 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:38:36.147532  207930 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:38:36.147545  207930 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:38:36.147660  207930 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:38:36.167793  207930 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:38:36.167815  207930 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:38:36.167836  207930 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:38:36.167869  207930 start.go:361] acquireMachinesLock for ha-898615: {Name:mk23a3ebf19c307a491c00f9452d757837bd240e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:36.167944  207930 start.go:365] duration metric: took 50.923µs to acquireMachinesLock for "ha-898615"
	I1009 19:38:36.167966  207930 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:38:36.167995  207930 fix.go:55] fixHost starting: 
	I1009 19:38:36.168216  207930 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:36.186209  207930 fix.go:113] recreateIfNeeded on ha-898615: state=Stopped err=<nil>
	W1009 19:38:36.186255  207930 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:38:36.188183  207930 out.go:252] * Restarting existing docker container for "ha-898615" ...
	I1009 19:38:36.188284  207930 cli_runner.go:164] Run: docker start ha-898615
	I1009 19:38:36.429165  207930 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:36.449048  207930 kic.go:430] container "ha-898615" state is running.
	I1009 19:38:36.449470  207930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:38:36.468830  207930 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:38:36.469116  207930 machine.go:93] provisionDockerMachine start ...
	I1009 19:38:36.469193  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:36.488569  207930 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:36.488848  207930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 19:38:36.488870  207930 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:38:36.489575  207930 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41826->127.0.0.1:32788: read: connection reset by peer
	I1009 19:38:39.637605  207930 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:38:39.637634  207930 ubuntu.go:182] provisioning hostname "ha-898615"
	I1009 19:38:39.637693  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:39.655862  207930 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:39.656140  207930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 19:38:39.656156  207930 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-898615 && echo "ha-898615" | sudo tee /etc/hostname
	I1009 19:38:39.812565  207930 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:38:39.812645  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:39.831046  207930 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:39.831304  207930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 19:38:39.831326  207930 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-898615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-898615/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-898615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:38:39.979591  207930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:38:39.979628  207930 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:38:39.979662  207930 ubuntu.go:190] setting up certificates
	I1009 19:38:39.979675  207930 provision.go:84] configureAuth start
	I1009 19:38:39.979738  207930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:38:39.997703  207930 provision.go:143] copyHostCerts
	I1009 19:38:39.997746  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:38:39.997777  207930 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:38:39.997806  207930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:38:39.997879  207930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:38:39.997970  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:38:39.997989  207930 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:38:39.997996  207930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:38:39.998029  207930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:38:39.998077  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:38:39.998096  207930 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:38:39.998102  207930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:38:39.998125  207930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:38:39.998178  207930 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.ha-898615 san=[127.0.0.1 192.168.49.2 ha-898615 localhost minikube]
	I1009 19:38:40.329941  207930 provision.go:177] copyRemoteCerts
	I1009 19:38:40.330005  207930 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:38:40.330048  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:40.348609  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:40.453024  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:38:40.453090  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:38:40.471037  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:38:40.471100  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:38:40.488791  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:38:40.488882  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:38:40.506540  207930 provision.go:87] duration metric: took 526.848912ms to configureAuth
	I1009 19:38:40.506573  207930 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:38:40.506763  207930 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:40.506890  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:40.524930  207930 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:40.525160  207930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 19:38:40.525178  207930 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:38:40.786346  207930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:38:40.786372  207930 machine.go:96] duration metric: took 4.31723847s to provisionDockerMachine
	I1009 19:38:40.786407  207930 start.go:294] postStartSetup for "ha-898615" (driver="docker")
	I1009 19:38:40.786419  207930 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:38:40.786479  207930 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:38:40.786518  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:40.804162  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:40.908341  207930 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:38:40.911873  207930 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:38:40.911904  207930 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:38:40.911923  207930 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:38:40.911983  207930 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:38:40.912072  207930 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:38:40.912085  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:38:40.912183  207930 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:38:40.919808  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:38:40.937277  207930 start.go:297] duration metric: took 150.853989ms for postStartSetup
	I1009 19:38:40.937349  207930 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:38:40.937424  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:40.955872  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:41.055635  207930 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:38:41.060222  207930 fix.go:57] duration metric: took 4.892219254s for fixHost
	I1009 19:38:41.060252  207930 start.go:84] releasing machines lock for "ha-898615", held for 4.892295934s
	I1009 19:38:41.060315  207930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:38:41.078135  207930 ssh_runner.go:195] Run: cat /version.json
	I1009 19:38:41.078202  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:41.078238  207930 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:38:41.078301  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:41.096227  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:41.096500  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:41.197266  207930 ssh_runner.go:195] Run: systemctl --version
	I1009 19:38:41.254878  207930 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:38:41.291881  207930 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:38:41.296935  207930 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:38:41.297063  207930 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:38:41.305687  207930 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:38:41.305714  207930 start.go:496] detecting cgroup driver to use...
	I1009 19:38:41.305778  207930 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:38:41.305833  207930 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:38:41.320848  207930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:38:41.334341  207930 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:38:41.334430  207930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:38:41.350433  207930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:38:41.364693  207930 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:38:41.444310  207930 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:38:41.525534  207930 docker.go:234] disabling docker service ...
	I1009 19:38:41.525603  207930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:38:41.540323  207930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:38:41.553168  207930 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:38:41.632212  207930 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:38:41.711096  207930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:38:41.724923  207930 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:38:41.740807  207930 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:38:41.740860  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.750143  207930 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:38:41.750201  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.759647  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.768954  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.778411  207930 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:38:41.786985  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.796139  207930 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.804565  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.813340  207930 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:38:41.821627  207930 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:38:41.829434  207930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:41.907787  207930 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:38:42.015071  207930 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:38:42.015128  207930 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:38:42.019190  207930 start.go:564] Will wait 60s for crictl version
	I1009 19:38:42.019246  207930 ssh_runner.go:195] Run: which crictl
	I1009 19:38:42.022757  207930 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:38:42.047602  207930 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:38:42.047669  207930 ssh_runner.go:195] Run: crio --version
	I1009 19:38:42.076709  207930 ssh_runner.go:195] Run: crio --version
	I1009 19:38:42.108280  207930 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:38:42.109626  207930 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:38:42.127160  207930 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:38:42.131748  207930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:38:42.142508  207930 kubeadm.go:883] updating cluster {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClien
tPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:38:42.142654  207930 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:38:42.142740  207930 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:38:42.176610  207930 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:38:42.176633  207930 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:38:42.176682  207930 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:38:42.202986  207930 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:38:42.203009  207930 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:38:42.203021  207930 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:38:42.203143  207930 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-898615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:38:42.203236  207930 ssh_runner.go:195] Run: crio config
	I1009 19:38:42.252192  207930 cni.go:84] Creating CNI manager for ""
	I1009 19:38:42.252222  207930 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:38:42.252244  207930 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:38:42.252274  207930 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-898615 NodeName:ha-898615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:38:42.252455  207930 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-898615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:38:42.252533  207930 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:38:42.261297  207930 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:38:42.261376  207930 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:38:42.269740  207930 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:38:42.283297  207930 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:38:42.296911  207930 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 19:38:42.310550  207930 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:38:42.314737  207930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:38:42.325511  207930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:42.406041  207930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:38:42.431169  207930 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615 for IP: 192.168.49.2
	I1009 19:38:42.431199  207930 certs.go:195] generating shared ca certs ...
	I1009 19:38:42.431223  207930 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:42.431407  207930 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:38:42.431466  207930 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:38:42.431481  207930 certs.go:257] generating profile certs ...
	I1009 19:38:42.431609  207930 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key
	I1009 19:38:42.431640  207930 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.ff60cacd
	I1009 19:38:42.431668  207930 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.ff60cacd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1009 19:38:42.592908  207930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.ff60cacd ...
	I1009 19:38:42.592943  207930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.ff60cacd: {Name:mkbeae8ef9cb7280e84a8eafb5e4ed5a9f929f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:42.593120  207930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.ff60cacd ...
	I1009 19:38:42.593133  207930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.ff60cacd: {Name:mk2a0878011f7339a4c02515e180398732017ed0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:42.593209  207930 certs.go:382] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.ff60cacd -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt
	I1009 19:38:42.593374  207930 certs.go:386] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.ff60cacd -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key
	I1009 19:38:42.593552  207930 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key
	I1009 19:38:42.593571  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:38:42.593584  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:38:42.593597  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:38:42.593608  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:38:42.593621  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:38:42.593631  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:38:42.593644  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:38:42.593653  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:38:42.593711  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:38:42.593739  207930 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:38:42.593749  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:38:42.593769  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:38:42.593790  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:38:42.593810  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:38:42.593855  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:38:42.593880  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:38:42.593893  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:38:42.593905  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:42.594443  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:38:42.612666  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:38:42.632698  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:38:42.651700  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:38:42.670307  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1009 19:38:42.688542  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:38:42.707425  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:38:42.725187  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:38:42.743982  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:38:42.762241  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:38:42.780038  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:38:42.798203  207930 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:38:42.811431  207930 ssh_runner.go:195] Run: openssl version
	I1009 19:38:42.818209  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:38:42.827928  207930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:42.831986  207930 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:42.832055  207930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:42.866756  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:38:42.875613  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:38:42.884840  207930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:38:42.888808  207930 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:38:42.888867  207930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:38:42.923046  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:38:42.932828  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:38:42.943293  207930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:38:42.948839  207930 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:38:42.948923  207930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:38:42.995613  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:38:43.005104  207930 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:38:43.009159  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:38:43.044274  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:38:43.079173  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:38:43.114440  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:38:43.149740  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:38:43.185030  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
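Each openssl x509 ... -checkend 86400 run above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would force regeneration instead of reuse. A minimal Go sketch of the same check, assuming a PEM-encoded certificate at the hypothetical local path cert.crt:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path; the log checks files under /var/lib/minikube/certs.
	data, err := os.ReadFile("cert.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("no CERTIFICATE block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of `openssl x509 -checkend 86400`: fail if the cert
	// expires within the next 24 hours.
	cutoff := time.Now().Add(24 * time.Hour)
	if cert.NotAfter.Before(cutoff) {
		fmt.Printf("certificate expires at %s (within 24h)\n", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Printf("certificate valid until %s\n", cert.NotAfter)
}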
	I1009 19:38:43.220218  207930 kubeadm.go:400] StartCluster: {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:38:43.220324  207930 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:38:43.220402  207930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:38:43.248594  207930 cri.go:89] found id: ""
	I1009 19:38:43.248669  207930 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:38:43.257055  207930 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:38:43.257079  207930 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:38:43.257130  207930 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:38:43.264639  207930 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:38:43.265056  207930 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:38:43.265186  207930 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-137890/kubeconfig needs updating (will repair): [kubeconfig missing "ha-898615" cluster setting kubeconfig missing "ha-898615" context setting]
	I1009 19:38:43.265530  207930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:43.266090  207930 kapi.go:59] client config for ha-898615: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:38:43.266647  207930 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:38:43.266666  207930 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:38:43.266673  207930 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:38:43.266678  207930 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:38:43.266683  207930 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:38:43.266709  207930 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:38:43.267074  207930 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:38:43.275909  207930 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:38:43.275950  207930 kubeadm.go:601] duration metric: took 18.863916ms to restartPrimaryControlPlane
	I1009 19:38:43.275961  207930 kubeadm.go:402] duration metric: took 55.75684ms to StartCluster
	I1009 19:38:43.275983  207930 settings.go:142] acquiring lock: {Name:mk34466033fb866bc8e167d2d953624ad0802283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:43.276054  207930 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:38:43.276601  207930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:43.276813  207930 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:38:43.276876  207930 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:38:43.277003  207930 addons.go:69] Setting storage-provisioner=true in profile "ha-898615"
	I1009 19:38:43.277022  207930 addons.go:238] Setting addon storage-provisioner=true in "ha-898615"
	I1009 19:38:43.277040  207930 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:43.277064  207930 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:38:43.277014  207930 addons.go:69] Setting default-storageclass=true in profile "ha-898615"
	I1009 19:38:43.277117  207930 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-898615"
	I1009 19:38:43.277359  207930 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:43.277562  207930 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:43.280740  207930 out.go:179] * Verifying Kubernetes components...
	I1009 19:38:43.282030  207930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:43.297922  207930 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:38:43.297964  207930 kapi.go:59] client config for ha-898615: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:38:43.298366  207930 addons.go:238] Setting addon default-storageclass=true in "ha-898615"
	I1009 19:38:43.298425  207930 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:38:43.298892  207930 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:43.299418  207930 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:38:43.299443  207930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:38:43.299502  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:43.322859  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:43.328834  207930 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:38:43.328858  207930 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:38:43.328930  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:43.352463  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:43.401305  207930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:38:43.415541  207930 node_ready.go:35] waiting up to 6m0s for node "ha-898615" to be "Ready" ...
	I1009 19:38:43.436410  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:38:43.466671  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:43.496356  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.496416  207930 retry.go:31] will retry after 342.264655ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:43.525246  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.525289  207930 retry.go:31] will retry after 174.41945ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.701041  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:43.758027  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.758057  207930 retry.go:31] will retry after 209.535579ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.839271  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:43.896734  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.896774  207930 retry.go:31] will retry after 538.756932ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.968448  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:44.023415  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:44.023454  207930 retry.go:31] will retry after 556.953167ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:44.436515  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:44.490407  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:44.490440  207930 retry.go:31] will retry after 711.386877ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:44.580632  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:44.634616  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:44.634648  207930 retry.go:31] will retry after 1.063862903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:45.202765  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:45.257625  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:45.257701  207930 retry.go:31] will retry after 1.231190246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:45.416376  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:45.698732  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:45.755705  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:45.755743  207930 retry.go:31] will retry after 975.429295ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:46.489752  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:46.545290  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:46.545326  207930 retry.go:31] will retry after 1.502139969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:46.731733  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:46.787009  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:46.787042  207930 retry.go:31] will retry after 2.693302994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:47.416975  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:48.048320  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:48.103285  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:48.103320  207930 retry.go:31] will retry after 2.181453682s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:49.480527  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:49.538700  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:49.538734  207930 retry.go:31] will retry after 4.218840209s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:49.916480  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:50.284976  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:50.341540  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:50.341577  207930 retry.go:31] will retry after 1.691103888s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:52.033656  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:52.088485  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:52.088524  207930 retry.go:31] will retry after 2.514845713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:52.416328  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:53.758082  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:53.814997  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:53.815038  207930 retry.go:31] will retry after 5.532299656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:54.416935  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:54.604251  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:54.659736  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:54.659775  207930 retry.go:31] will retry after 3.993767117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:56.916955  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:58.654616  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:58.713410  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:58.713444  207930 retry.go:31] will retry after 9.568142224s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:59.347766  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:59.404337  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:59.404389  207930 retry.go:31] will retry after 6.225933732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:59.417079  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:01.916592  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:03.916927  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:05.630497  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:39:05.685477  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:05.685519  207930 retry.go:31] will retry after 12.822953608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:06.417252  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:08.282818  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:39:08.337692  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:08.337725  207930 retry.go:31] will retry after 7.236832581s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:08.916334  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:10.916501  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:12.917166  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:15.416556  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:15.574832  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:39:15.630769  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:15.630807  207930 retry.go:31] will retry after 32.093842325s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:17.417148  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:18.509437  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:39:18.569821  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:18.569859  207930 retry.go:31] will retry after 8.204907126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:19.917021  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:22.416351  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:24.416692  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:26.775723  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:39:26.830536  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:26.830579  207930 retry.go:31] will retry after 15.287470649s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:26.916248  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:28.916363  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:30.916660  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:33.416644  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:35.916574  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:37.917054  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:40.416285  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:42.118997  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:39:42.177198  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:42.177233  207930 retry.go:31] will retry after 19.60601903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:42.417176  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:44.916569  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:46.916954  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:47.725338  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:39:47.781475  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:47.781505  207930 retry.go:31] will retry after 23.586099799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:49.416272  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:51.416491  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:53.416753  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:55.417076  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:57.916443  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:00.416299  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:40:01.784079  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:40:01.841854  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:40:01.842020  207930 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1009 19:40:02.416680  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:04.417047  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:06.917026  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:09.417084  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:40:11.367994  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:40:11.424607  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:40:11.424769  207930 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:40:11.426781  207930 out.go:179] * Enabled addons: 
	I1009 19:40:11.428422  207930 addons.go:514] duration metric: took 1m28.151542071s for enable addons: enabled=[]
	W1009 19:40:11.916694  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:14.416598  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:16.416881  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:18.916218  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:20.916269  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:22.916576  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:24.917209  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:27.416502  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:29.916343  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:31.916934  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:34.416282  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:36.416512  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:38.416629  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:40.916603  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:42.917115  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:44.917161  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:47.416324  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:49.416455  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:51.416819  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:53.916417  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:55.916617  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:57.917078  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:00.417118  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:02.916234  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:04.916660  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:07.417218  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:09.916259  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:11.916515  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:13.917072  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:16.416622  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:18.916522  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:21.416789  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:23.916210  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:26.416522  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:28.916461  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:31.416717  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:33.916099  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:36.416332  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:38.916209  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:40.917157  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:43.416729  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:45.916510  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:47.916984  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:50.416131  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:52.416502  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:54.416618  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:56.416678  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:58.916609  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:01.416745  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:03.416958  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:05.916865  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:07.917104  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:10.416213  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:12.416339  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:14.416628  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:16.416997  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:18.916238  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:20.917143  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:23.416620  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:25.916367  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:27.916596  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:30.416273  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:32.916182  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:34.916794  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:37.416177  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:39.916144  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:41.916475  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:44.416966  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:46.916415  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:49.416202  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:51.416497  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:53.416539  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:55.916219  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:58.416153  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:00.417148  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:02.916226  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:04.916621  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:07.416226  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:09.916189  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:11.916327  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:13.916488  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:15.917040  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:18.416211  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:20.916112  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:22.916253  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:24.916887  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:26.917148  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:29.417134  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:31.916366  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:33.916553  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:36.416538  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:38.416719  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:40.916669  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:42.916983  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:45.416239  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:47.916206  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:50.417144  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:52.917059  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:55.416176  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:57.916181  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:59.916956  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:02.416315  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:04.416660  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:06.416831  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:08.417094  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:10.916165  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:12.916294  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:14.916520  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:16.916888  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:19.416178  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:21.416435  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:23.416520  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:25.417135  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:27.916344  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:30.416306  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:32.416429  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:34.416783  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:36.916172  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:39.416191  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:41.416480  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:44:43.416121  207930 node_ready.go:38] duration metric: took 6m0.000528946s for node "ha-898615" to be "Ready" ...
	I1009 19:44:43.418255  207930 out.go:203] 
	W1009 19:44:43.419680  207930 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:44:43.419700  207930 out.go:285] * 
	* 
	W1009 19:44:43.421462  207930 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:44:43.422822  207930 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 -p ha-898615 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 node list --alsologtostderr -v 5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-898615
helpers_test.go:243: (dbg) docker inspect ha-898615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	        "Created": "2025-10-09T19:27:46.185451139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 208124,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:38:36.214223554Z",
	            "FinishedAt": "2025-10-09T19:38:35.052463553Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hostname",
	        "HostsPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hosts",
	        "LogPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e-json.log",
	        "Name": "/ha-898615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-898615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-898615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	                "LowerDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-898615",
	                "Source": "/var/lib/docker/volumes/ha-898615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-898615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-898615",
	                "name.minikube.sigs.k8s.io": "ha-898615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ed39874bd8a00ff4fec1cf869ad7e0f72bd903d36f0543c07f3bdadae1a02c8a",
	            "SandboxKey": "/var/run/docker/netns/ed39874bd8a0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-898615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:c8:1b:ff:df:39",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a28b71fbb3464ef90fae817eea2f214ee0a3d1fe379af7ad6c6cc41b0261919e",
	                    "EndpointID": "f44155a867b0645d8a2be6662daaecc287c0af49551a1cb0ce8c095eaa3c9fd2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-898615",
	                        "24eb0e5b6c52"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615: exit status 2 (303.795784ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-898615 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml            │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- rollout status deployment/busybox                      │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node add --alsologtostderr -v 5                                   │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node stop m02 --alsologtostderr -v 5                              │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node start m02 --alsologtostderr -v 5                             │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node list --alsologtostderr -v 5                                  │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ stop    │ ha-898615 stop --alsologtostderr -v 5                                       │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ ha-898615 start --wait true --alsologtostderr -v 5                          │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ node    │ ha-898615 node list --alsologtostderr -v 5                                  │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:38:35
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:38:35.975523  207930 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:38:35.975809  207930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:35.975820  207930 out.go:374] Setting ErrFile to fd 2...
	I1009 19:38:35.975824  207930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:35.976017  207930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:38:35.976529  207930 out.go:368] Setting JSON to false
	I1009 19:38:35.977520  207930 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4865,"bootTime":1760033851,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:38:35.977618  207930 start.go:143] virtualization: kvm guest
	I1009 19:38:35.979911  207930 out.go:179] * [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:38:35.981312  207930 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:38:35.981323  207930 notify.go:221] Checking for updates...
	I1009 19:38:35.983929  207930 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:38:35.985330  207930 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:38:35.986909  207930 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:38:35.988196  207930 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:38:35.989553  207930 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:38:35.991338  207930 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:35.991495  207930 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:38:36.015602  207930 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:38:36.015757  207930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:38:36.075307  207930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:38:36.06526946 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:38:36.075425  207930 docker.go:319] overlay module found
	I1009 19:38:36.077367  207930 out.go:179] * Using the docker driver based on existing profile
	I1009 19:38:36.078862  207930 start.go:309] selected driver: docker
	I1009 19:38:36.078876  207930 start.go:930] validating driver "docker" against &{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:38:36.078976  207930 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:38:36.079059  207930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:38:36.140960  207930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:38:36.131484248 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:38:36.141642  207930 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:38:36.141674  207930 cni.go:84] Creating CNI manager for ""
	I1009 19:38:36.141735  207930 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:38:36.141786  207930 start.go:353] cluster config:
	{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1009 19:38:36.143505  207930 out.go:179] * Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	I1009 19:38:36.144834  207930 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:38:36.146099  207930 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:38:36.147345  207930 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:38:36.147407  207930 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:38:36.147422  207930 cache.go:58] Caching tarball of preloaded images
	I1009 19:38:36.147438  207930 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:38:36.147532  207930 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:38:36.147545  207930 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:38:36.147660  207930 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:38:36.167793  207930 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:38:36.167815  207930 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:38:36.167836  207930 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:38:36.167869  207930 start.go:361] acquireMachinesLock for ha-898615: {Name:mk23a3ebf19c307a491c00f9452d757837bd240e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:36.167944  207930 start.go:365] duration metric: took 50.923µs to acquireMachinesLock for "ha-898615"
	I1009 19:38:36.167966  207930 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:38:36.167995  207930 fix.go:55] fixHost starting: 
	I1009 19:38:36.168216  207930 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:36.186209  207930 fix.go:113] recreateIfNeeded on ha-898615: state=Stopped err=<nil>
	W1009 19:38:36.186255  207930 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:38:36.188183  207930 out.go:252] * Restarting existing docker container for "ha-898615" ...
	I1009 19:38:36.188284  207930 cli_runner.go:164] Run: docker start ha-898615
	I1009 19:38:36.429165  207930 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:36.449048  207930 kic.go:430] container "ha-898615" state is running.
	I1009 19:38:36.449470  207930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:38:36.468830  207930 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:38:36.469116  207930 machine.go:93] provisionDockerMachine start ...
	I1009 19:38:36.469193  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:36.488569  207930 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:36.488848  207930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 19:38:36.488870  207930 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:38:36.489575  207930 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41826->127.0.0.1:32788: read: connection reset by peer
	I1009 19:38:39.637605  207930 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:38:39.637634  207930 ubuntu.go:182] provisioning hostname "ha-898615"
	I1009 19:38:39.637693  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:39.655862  207930 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:39.656140  207930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 19:38:39.656156  207930 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-898615 && echo "ha-898615" | sudo tee /etc/hostname
	I1009 19:38:39.812565  207930 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:38:39.812645  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:39.831046  207930 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:39.831304  207930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 19:38:39.831326  207930 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-898615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-898615/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-898615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:38:39.979591  207930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:38:39.979628  207930 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:38:39.979662  207930 ubuntu.go:190] setting up certificates
	I1009 19:38:39.979675  207930 provision.go:84] configureAuth start
	I1009 19:38:39.979738  207930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:38:39.997703  207930 provision.go:143] copyHostCerts
	I1009 19:38:39.997746  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:38:39.997777  207930 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:38:39.997806  207930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:38:39.997879  207930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:38:39.997970  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:38:39.997989  207930 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:38:39.997996  207930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:38:39.998029  207930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:38:39.998077  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:38:39.998096  207930 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:38:39.998102  207930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:38:39.998125  207930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:38:39.998178  207930 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.ha-898615 san=[127.0.0.1 192.168.49.2 ha-898615 localhost minikube]
	I1009 19:38:40.329941  207930 provision.go:177] copyRemoteCerts
	I1009 19:38:40.330005  207930 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:38:40.330048  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:40.348609  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:40.453024  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:38:40.453090  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:38:40.471037  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:38:40.471100  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:38:40.488791  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:38:40.488882  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:38:40.506540  207930 provision.go:87] duration metric: took 526.848912ms to configureAuth
	I1009 19:38:40.506573  207930 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:38:40.506763  207930 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:40.506890  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:40.524930  207930 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:40.525160  207930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 19:38:40.525178  207930 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:38:40.786346  207930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:38:40.786372  207930 machine.go:96] duration metric: took 4.31723847s to provisionDockerMachine
	I1009 19:38:40.786407  207930 start.go:294] postStartSetup for "ha-898615" (driver="docker")
	I1009 19:38:40.786419  207930 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:38:40.786479  207930 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:38:40.786518  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:40.804162  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:40.908341  207930 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:38:40.911873  207930 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:38:40.911904  207930 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:38:40.911923  207930 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:38:40.911983  207930 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:38:40.912072  207930 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:38:40.912085  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:38:40.912183  207930 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:38:40.919808  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:38:40.937277  207930 start.go:297] duration metric: took 150.853989ms for postStartSetup
	I1009 19:38:40.937349  207930 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:38:40.937424  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:40.955872  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:41.055635  207930 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:38:41.060222  207930 fix.go:57] duration metric: took 4.892219254s for fixHost
	I1009 19:38:41.060252  207930 start.go:84] releasing machines lock for "ha-898615", held for 4.892295934s
	I1009 19:38:41.060315  207930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:38:41.078135  207930 ssh_runner.go:195] Run: cat /version.json
	I1009 19:38:41.078202  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:41.078238  207930 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:38:41.078301  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:41.096227  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:41.096500  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:41.197266  207930 ssh_runner.go:195] Run: systemctl --version
	I1009 19:38:41.254878  207930 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:38:41.291881  207930 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:38:41.296935  207930 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:38:41.297063  207930 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:38:41.305687  207930 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:38:41.305714  207930 start.go:496] detecting cgroup driver to use...
	I1009 19:38:41.305778  207930 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:38:41.305833  207930 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:38:41.320848  207930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:38:41.334341  207930 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:38:41.334430  207930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:38:41.350433  207930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:38:41.364693  207930 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:38:41.444310  207930 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:38:41.525534  207930 docker.go:234] disabling docker service ...
	I1009 19:38:41.525603  207930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:38:41.540323  207930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:38:41.553168  207930 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:38:41.632212  207930 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:38:41.711096  207930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:38:41.724923  207930 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:38:41.740807  207930 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:38:41.740860  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.750143  207930 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:38:41.750201  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.759647  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.768954  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.778411  207930 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:38:41.786985  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.796139  207930 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.804565  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.813340  207930 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:38:41.821627  207930 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:38:41.829434  207930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:41.907787  207930 ssh_runner.go:195] Run: sudo systemctl restart crio
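
	(Editor's note: taken together, the sed edits above set the pause image, switch the cgroup manager to systemd, pin conmon to the "pod" cgroup, and re-add the unprivileged-port sysctl before crio is restarted. A rough sketch of the resulting /etc/crio/crio.conf.d/02-crio.conf fragment is shown below; the [crio.image] and [crio.runtime] section headers are assumed from the stock kicbase layout and do not appear in the log.)

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"

	    [crio.runtime]
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
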
	I1009 19:38:42.015071  207930 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:38:42.015128  207930 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:38:42.019190  207930 start.go:564] Will wait 60s for crictl version
	I1009 19:38:42.019246  207930 ssh_runner.go:195] Run: which crictl
	I1009 19:38:42.022757  207930 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:38:42.047602  207930 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:38:42.047669  207930 ssh_runner.go:195] Run: crio --version
	I1009 19:38:42.076709  207930 ssh_runner.go:195] Run: crio --version
	I1009 19:38:42.108280  207930 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:38:42.109626  207930 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:38:42.127160  207930 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:38:42.131748  207930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:38:42.142508  207930 kubeadm.go:883] updating cluster {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClien
tPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:38:42.142654  207930 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:38:42.142740  207930 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:38:42.176610  207930 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:38:42.176633  207930 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:38:42.176682  207930 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:38:42.202986  207930 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:38:42.203009  207930 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:38:42.203021  207930 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:38:42.203143  207930 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-898615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:38:42.203236  207930 ssh_runner.go:195] Run: crio config
	I1009 19:38:42.252192  207930 cni.go:84] Creating CNI manager for ""
	I1009 19:38:42.252222  207930 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:38:42.252244  207930 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:38:42.252274  207930 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-898615 NodeName:ha-898615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:38:42.252455  207930 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-898615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:38:42.252533  207930 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:38:42.261297  207930 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:38:42.261376  207930 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:38:42.269740  207930 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:38:42.283297  207930 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:38:42.296911  207930 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 19:38:42.310550  207930 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:38:42.314737  207930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:38:42.325511  207930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:42.406041  207930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:38:42.431169  207930 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615 for IP: 192.168.49.2
	I1009 19:38:42.431199  207930 certs.go:195] generating shared ca certs ...
	I1009 19:38:42.431223  207930 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:42.431407  207930 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:38:42.431466  207930 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:38:42.431481  207930 certs.go:257] generating profile certs ...
	I1009 19:38:42.431609  207930 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key
	I1009 19:38:42.431640  207930 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.ff60cacd
	I1009 19:38:42.431668  207930 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.ff60cacd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1009 19:38:42.592908  207930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.ff60cacd ...
	I1009 19:38:42.592943  207930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.ff60cacd: {Name:mkbeae8ef9cb7280e84a8eafb5e4ed5a9f929f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:42.593120  207930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.ff60cacd ...
	I1009 19:38:42.593133  207930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.ff60cacd: {Name:mk2a0878011f7339a4c02515e180398732017ed0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:42.593209  207930 certs.go:382] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.ff60cacd -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt
	I1009 19:38:42.593374  207930 certs.go:386] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.ff60cacd -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key
	I1009 19:38:42.593552  207930 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key
	I1009 19:38:42.593571  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:38:42.593584  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:38:42.593597  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:38:42.593608  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:38:42.593621  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:38:42.593631  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:38:42.593644  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:38:42.593653  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:38:42.593711  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:38:42.593739  207930 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:38:42.593749  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:38:42.593769  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:38:42.593790  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:38:42.593810  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:38:42.593855  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:38:42.593880  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:38:42.593893  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:38:42.593905  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:42.594443  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:38:42.612666  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:38:42.632698  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:38:42.651700  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:38:42.670307  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1009 19:38:42.688542  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:38:42.707425  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:38:42.725187  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:38:42.743982  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:38:42.762241  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:38:42.780038  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:38:42.798203  207930 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
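Note: the NewFileAsset and scp lines above stage each local certificate at its destination path on the node. A minimal sketch of that staging step, treating the copy as a plain scp against the SSH endpoint this log reports later (127.0.0.1:32788, user docker, the profile's id_rsa key); minikube's ssh_runner copies over its own SSH session, so this is illustrative only:

// certcopy.go - illustrative only, not minikube's ssh_runner: push each
// local cert to its destination on the node over the reported SSH endpoint.
package main

import (
	"fmt"
	"os/exec"
)

type asset struct{ src, dst string }

func main() {
	// Source -> destination pairs taken from the NewFileAsset lines above.
	assets := []asset{
		{"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", "/var/lib/minikube/certs/ca.crt"},
		{"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt", "/var/lib/minikube/certs/apiserver.crt"},
	}
	key := "/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa"
	for _, a := range assets {
		cmd := exec.Command("scp", "-P", "32788", "-i", key, a.src, "docker@127.0.0.1:"+a.dst)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("copy %s failed: %v\n%s\n", a.src, err, out)
		}
	}
}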
	I1009 19:38:42.811431  207930 ssh_runner.go:195] Run: openssl version
	I1009 19:38:42.818209  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:38:42.827928  207930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:42.831986  207930 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:42.832055  207930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:42.866756  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:38:42.875613  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:38:42.884840  207930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:38:42.888808  207930 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:38:42.888867  207930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:38:42.923046  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:38:42.932828  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:38:42.943293  207930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:38:42.948839  207930 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:38:42.948923  207930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:38:42.995613  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
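Note: the three hash-and-link sequences above follow the OpenSSL CA-store convention: hash each CA with "openssl x509 -hash -noout" and symlink /etc/ssl/certs/<hash>.0 (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run) back to the PEM so TLS clients can find it. A minimal Go sketch of that step, assuming openssl is on PATH and the process may write to /etc/ssl/certs (illustrative, not minikube's certs.go):

// hashlink.go - sketch of the subject-hash symlink step shown in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA computes the OpenSSL subject hash of a CA certificate and creates
// the /etc/ssl/certs/<hash>.0 symlink pointing back at it, mirroring the
// "openssl x509 -hash" plus "ln -fs" pair above.
func linkCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}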
	I1009 19:38:43.005104  207930 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:38:43.009159  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:38:43.044274  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:38:43.079173  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:38:43.114440  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:38:43.149740  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:38:43.185030  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
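Note: each of the -checkend 86400 runs above asks whether a control-plane certificate expires within the next 24 hours (a non-zero exit would mean it does, and the cert would need regenerating). A rough Go equivalent of that check using crypto/x509, shown only to spell out what is being tested:

// checkend.go - sketch of what "openssl x509 -noout -checkend 86400" verifies.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires inside the
// next window d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}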
	I1009 19:38:43.220218  207930 kubeadm.go:400] StartCluster: {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:38:43.220324  207930 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:38:43.220402  207930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:38:43.248594  207930 cri.go:89] found id: ""
	I1009 19:38:43.248669  207930 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:38:43.257055  207930 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:38:43.257079  207930 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:38:43.257130  207930 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:38:43.264639  207930 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
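Note: the restart decision just above hinges on the "sudo ls" probe: if the kubelet configuration files and the etcd data directory it lists are already on disk, minikube attempts a cluster restart rather than a fresh kubeadm init. A small sketch of that detection (illustrative, not the kubeadm.go logic itself):

// restartcheck.go - sketch of the "found existing configuration files" check.
package main

import (
	"fmt"
	"os"
)

// hasExistingCluster reports whether the state the log lists is present.
func hasExistingCluster() bool {
	paths := []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	}
	for _, p := range paths {
		if _, err := os.Stat(p); err != nil {
			return false
		}
	}
	return true
}

func main() {
	if hasExistingCluster() {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("no prior state, bootstrapping a fresh control plane")
	}
}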
	I1009 19:38:43.265056  207930 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:38:43.265186  207930 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-137890/kubeconfig needs updating (will repair): [kubeconfig missing "ha-898615" cluster setting kubeconfig missing "ha-898615" context setting]
	I1009 19:38:43.265530  207930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
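Note: the repair above adds the missing "ha-898615" cluster and context entries to the kubeconfig under a file lock before writing it back. A sketch of an equivalent edit with client-go's clientcmd helpers, using the paths printed in this log (illustrative; the exact fields minikube writes may differ):

// kubeconfigrepair.go - sketch of adding the missing cluster/context entries.
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/21683-137890/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		cfg = api.NewConfig() // fall back to an empty config if the file is unreadable
	}
	cfg.Clusters["ha-898615"] = &api.Cluster{
		Server:               "https://192.168.49.2:8443",
		CertificateAuthority: "/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt",
	}
	cfg.AuthInfos["ha-898615"] = &api.AuthInfo{
		ClientCertificate: "/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt",
		ClientKey:         "/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key",
	}
	cfg.Contexts["ha-898615"] = &api.Context{Cluster: "ha-898615", AuthInfo: "ha-898615"}
	cfg.CurrentContext = "ha-898615"
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}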
	I1009 19:38:43.266090  207930 kapi.go:59] client config for ha-898615: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:38:43.266647  207930 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:38:43.266666  207930 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:38:43.266673  207930 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:38:43.266678  207930 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:38:43.266683  207930 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:38:43.266709  207930 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:38:43.267074  207930 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:38:43.275909  207930 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:38:43.275950  207930 kubeadm.go:601] duration metric: took 18.863916ms to restartPrimaryControlPlane
	I1009 19:38:43.275961  207930 kubeadm.go:402] duration metric: took 55.75684ms to StartCluster
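Note: the kapi client config above is an administrator client built directly from the profile's client certificate, key, and CA, pointed at https://192.168.49.2:8443; it is what the node readiness checks below use. A minimal client-go sketch with those same paths (a sketch only; during this part of the run every call fails with "connection refused", as the warnings below show):

// kapiclient.go - sketch of the client built from the profile's credentials.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Same endpoint and credential files the kapi client config above reports.
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-898615", metav1.GetOptions{})
	if err != nil {
		// While the apiserver is down this fails with "connection refused".
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node:", node.Name)
}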
	I1009 19:38:43.275983  207930 settings.go:142] acquiring lock: {Name:mk34466033fb866bc8e167d2d953624ad0802283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:43.276054  207930 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:38:43.276601  207930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:43.276813  207930 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:38:43.276876  207930 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:38:43.277003  207930 addons.go:69] Setting storage-provisioner=true in profile "ha-898615"
	I1009 19:38:43.277022  207930 addons.go:238] Setting addon storage-provisioner=true in "ha-898615"
	I1009 19:38:43.277040  207930 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:43.277064  207930 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:38:43.277014  207930 addons.go:69] Setting default-storageclass=true in profile "ha-898615"
	I1009 19:38:43.277117  207930 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-898615"
	I1009 19:38:43.277359  207930 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:43.277562  207930 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:43.280740  207930 out.go:179] * Verifying Kubernetes components...
	I1009 19:38:43.282030  207930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:43.297922  207930 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:38:43.297964  207930 kapi.go:59] client config for ha-898615: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:38:43.298366  207930 addons.go:238] Setting addon default-storageclass=true in "ha-898615"
	I1009 19:38:43.298425  207930 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:38:43.298892  207930 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:43.299418  207930 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:38:43.299443  207930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:38:43.299502  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:43.322859  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:43.328834  207930 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:38:43.328858  207930 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:38:43.328930  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:43.352463  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:43.401305  207930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:38:43.415541  207930 node_ready.go:35] waiting up to 6m0s for node "ha-898615" to be "Ready" ...
	I1009 19:38:43.436410  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:38:43.466671  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:43.496356  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.496416  207930 retry.go:31] will retry after 342.264655ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:43.525246  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.525289  207930 retry.go:31] will retry after 174.41945ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
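Note: the pattern that repeats from here on: kubectl apply validates each addon manifest against the apiserver's OpenAPI schema, the apiserver behind localhost:8443 on the node is not yet accepting connections, so every apply fails and retry.go schedules another attempt after a growing delay. A sketch of that retry-with-backoff shape (not minikube's retry.go; the real intervals and jitter differ):

// retryapply.go - sketch of the apply-and-retry loop visible in the log.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry runs kubectl apply and, on failure, waits an increasing,
// jittered interval before the next attempt.
func applyWithRetry(manifest string, attempts int) error {
	backoff := 200 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		out, e := exec.Command("sudo", "kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if e == nil {
			return nil
		}
		err = fmt.Errorf("apply %s: %v: %s", manifest, e, out)
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		backoff *= 2 // grow the delay between attempts
	}
	return err
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}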
	I1009 19:38:43.701041  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:43.758027  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.758057  207930 retry.go:31] will retry after 209.535579ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.839271  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:43.896734  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.896774  207930 retry.go:31] will retry after 538.756932ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.968448  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:44.023415  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:44.023454  207930 retry.go:31] will retry after 556.953167ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:44.436515  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:44.490407  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:44.490440  207930 retry.go:31] will retry after 711.386877ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:44.580632  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:44.634616  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:44.634648  207930 retry.go:31] will retry after 1.063862903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:45.202765  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:45.257625  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:45.257701  207930 retry.go:31] will retry after 1.231190246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:45.416376  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
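Note: from this point the node_ready.go warnings interleave with the addon retries: the same wait loop polls the node's Ready condition roughly every two seconds for up to six minutes, and each Get fails while 192.168.49.2:8443 refuses connections. A sketch of such a readiness wait with client-go (illustrative, not minikube's node_ready.go):

// nodeready.go - sketch of the wait loop producing the warnings below.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForReady polls the node's Ready condition until it is True or the
// timeout elapses, logging and retrying on transient API errors.
func waitForReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			// e.g. "connect: connection refused" while the apiserver is down
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = waitForReady(cs, "ha-898615", 6*time.Minute)
}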
	I1009 19:38:45.698732  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:45.755705  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:45.755743  207930 retry.go:31] will retry after 975.429295ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:46.489752  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:46.545290  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:46.545326  207930 retry.go:31] will retry after 1.502139969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:46.731733  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:46.787009  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:46.787042  207930 retry.go:31] will retry after 2.693302994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:47.416975  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:48.048320  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:48.103285  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:48.103320  207930 retry.go:31] will retry after 2.181453682s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:49.480527  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:49.538700  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:49.538734  207930 retry.go:31] will retry after 4.218840209s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:49.916480  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:50.284976  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:50.341540  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:50.341577  207930 retry.go:31] will retry after 1.691103888s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:52.033656  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:52.088485  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:52.088524  207930 retry.go:31] will retry after 2.514845713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:52.416328  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:53.758082  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:53.814997  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:53.815038  207930 retry.go:31] will retry after 5.532299656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:54.416935  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:54.604251  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:54.659736  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:54.659775  207930 retry.go:31] will retry after 3.993767117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:56.916955  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:58.654616  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:58.713410  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:58.713444  207930 retry.go:31] will retry after 9.568142224s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:59.347766  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:59.404337  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:59.404389  207930 retry.go:31] will retry after 6.225933732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:59.417079  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:01.916592  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:03.916927  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:05.630497  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:39:05.685477  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:05.685519  207930 retry.go:31] will retry after 12.822953608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:06.417252  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:08.282818  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:39:08.337692  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:08.337725  207930 retry.go:31] will retry after 7.236832581s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:08.916334  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:10.916501  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:12.917166  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:15.416556  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:15.574832  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:39:15.630769  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:15.630807  207930 retry.go:31] will retry after 32.093842325s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:17.417148  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:18.509437  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:39:18.569821  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:18.569859  207930 retry.go:31] will retry after 8.204907126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:19.917021  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:22.416351  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:24.416692  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:26.775723  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:39:26.830536  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:26.830579  207930 retry.go:31] will retry after 15.287470649s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:26.916248  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:28.916363  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:30.916660  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:33.416644  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:35.916574  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:37.917054  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:40.416285  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:42.118997  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:39:42.177198  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:42.177233  207930 retry.go:31] will retry after 19.60601903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:42.417176  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:44.916569  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:46.916954  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:47.725338  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:39:47.781475  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:47.781505  207930 retry.go:31] will retry after 23.586099799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:49.416272  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:51.416491  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:53.416753  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:55.417076  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:57.916443  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:00.416299  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:40:01.784079  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:40:01.841854  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:40:01.842020  207930 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1009 19:40:02.416680  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:04.417047  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:06.917026  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:09.417084  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:40:11.367994  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:40:11.424607  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:40:11.424769  207930 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:40:11.426781  207930 out.go:179] * Enabled addons: 
	I1009 19:40:11.428422  207930 addons.go:514] duration metric: took 1m28.151542071s for enable addons: enabled=[]
	W1009 19:40:11.916694  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:14.416598  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:16.416881  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:18.916218  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:20.916269  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:22.916576  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:24.917209  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:27.416502  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:29.916343  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:31.916934  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:34.416282  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:36.416512  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:38.416629  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:40.916603  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:42.917115  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:44.917161  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:47.416324  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:49.416455  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:51.416819  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:53.916417  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:55.916617  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:57.917078  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:00.417118  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:02.916234  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:04.916660  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:07.417218  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:09.916259  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:11.916515  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:13.917072  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:16.416622  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:18.916522  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:21.416789  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:23.916210  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:26.416522  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:28.916461  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:31.416717  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:33.916099  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:36.416332  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:38.916209  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:40.917157  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:43.416729  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:45.916510  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:47.916984  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:50.416131  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:52.416502  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:54.416618  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:56.416678  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:58.916609  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:01.416745  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:03.416958  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:05.916865  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:07.917104  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:10.416213  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:12.416339  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:14.416628  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:16.416997  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:18.916238  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:20.917143  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:23.416620  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:25.916367  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:27.916596  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:30.416273  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:32.916182  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:34.916794  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:37.416177  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:39.916144  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:41.916475  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:44.416966  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:46.916415  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:49.416202  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:51.416497  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:53.416539  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:55.916219  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:58.416153  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:00.417148  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:02.916226  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:04.916621  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:07.416226  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:09.916189  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:11.916327  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:13.916488  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:15.917040  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:18.416211  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:20.916112  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:22.916253  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:24.916887  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:26.917148  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:29.417134  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:31.916366  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:33.916553  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:36.416538  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:38.416719  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:40.916669  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:42.916983  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:45.416239  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:47.916206  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:50.417144  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:52.917059  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:55.416176  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:57.916181  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:59.916956  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:02.416315  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:04.416660  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:06.416831  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:08.417094  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:10.916165  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:12.916294  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:14.916520  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:16.916888  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:19.416178  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:21.416435  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:23.416520  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:25.417135  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:27.916344  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:30.416306  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:32.416429  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:34.416783  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:36.916172  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:39.416191  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:41.416480  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:44:43.416121  207930 node_ready.go:38] duration metric: took 6m0.000528946s for node "ha-898615" to be "Ready" ...
	I1009 19:44:43.418255  207930 out.go:203] 
	W1009 19:44:43.419680  207930 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:44:43.419700  207930 out.go:285] * 
	W1009 19:44:43.421462  207930 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:44:43.422822  207930 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:44:34 ha-898615 crio[519]: time="2025-10-09T19:44:34.549971848Z" level=info msg="createCtr: removing container fc426c179253de3c5286d6424aa571281a653237ad6481a740cca377e8b5a7a0" id=9b651ebc-2874-41fc-9b26-759216cb0f18 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:34 ha-898615 crio[519]: time="2025-10-09T19:44:34.550007454Z" level=info msg="createCtr: deleting container fc426c179253de3c5286d6424aa571281a653237ad6481a740cca377e8b5a7a0 from storage" id=9b651ebc-2874-41fc-9b26-759216cb0f18 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:34 ha-898615 crio[519]: time="2025-10-09T19:44:34.552201133Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-898615_kube-system_2d43b86ac03e93e11f35c923e2560103_0" id=9b651ebc-2874-41fc-9b26-759216cb0f18 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.52508731Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b5661c60-1300-4632-aaf7-bdeeac864bc9 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.526055709Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=1725e831-dbda-472e-986a-63cc1de3e757 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.527117412Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-898615/kube-apiserver" id=899ed538-a4e2-4701-8914-e39461a177f4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.5273638Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.530915074Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.531527799Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.545578816Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=899ed538-a4e2-4701-8914-e39461a177f4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.54700936Z" level=info msg="createCtr: deleting container ID 9bc0044ce65cd09f8ebcf2e321d035b54d500f0a1a35b9b784ebc22754281edd from idIndex" id=899ed538-a4e2-4701-8914-e39461a177f4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.547050058Z" level=info msg="createCtr: removing container 9bc0044ce65cd09f8ebcf2e321d035b54d500f0a1a35b9b784ebc22754281edd" id=899ed538-a4e2-4701-8914-e39461a177f4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.54708322Z" level=info msg="createCtr: deleting container 9bc0044ce65cd09f8ebcf2e321d035b54d500f0a1a35b9b784ebc22754281edd from storage" id=899ed538-a4e2-4701-8914-e39461a177f4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.549281769Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-898615_kube-system_aad558bc9f2efde8d3c90d373798de18_0" id=899ed538-a4e2-4701-8914-e39461a177f4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.52406925Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=e3528edc-b46b-4c43-b502-1584b9f192b2 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.525117742Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=87b20685-79e2-458d-af66-e89814136ec1 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.526139813Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-898615/kube-controller-manager" id=35169ac7-542c-4f30-a813-a1ee3b7a8393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.526359081Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.529823317Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.530449796Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.54648838Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=35169ac7-542c-4f30-a813-a1ee3b7a8393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.547848534Z" level=info msg="createCtr: deleting container ID 0d518b35fc4962587c6c0c28fd540b49da12c05e477b6cfa87ac0ac0ee9fcc9b from idIndex" id=35169ac7-542c-4f30-a813-a1ee3b7a8393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.547885787Z" level=info msg="createCtr: removing container 0d518b35fc4962587c6c0c28fd540b49da12c05e477b6cfa87ac0ac0ee9fcc9b" id=35169ac7-542c-4f30-a813-a1ee3b7a8393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.547920392Z" level=info msg="createCtr: deleting container 0d518b35fc4962587c6c0c28fd540b49da12c05e477b6cfa87ac0ac0ee9fcc9b from storage" id=35169ac7-542c-4f30-a813-a1ee3b7a8393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.550180502Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-898615_kube-system_606a9e8949f01295d66148e5eac379ce_0" id=35169ac7-542c-4f30-a813-a1ee3b7a8393 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:44:44.415941    2001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:44:44.416540    2001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:44:44.418217    2001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:44:44.419156    2001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:44:44.419985    2001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:44:44 up  1:27,  0 user,  load average: 0.03, 0.06, 1.10
	Linux ha-898615 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:44:34 ha-898615 kubelet[666]:  > logger="UnhandledError"
	Oct 09 19:44:34 ha-898615 kubelet[666]: E1009 19:44:34.552683     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-898615" podUID="2d43b86ac03e93e11f35c923e2560103"
	Oct 09 19:44:36 ha-898615 kubelet[666]: E1009 19:44:36.524574     666 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:44:36 ha-898615 kubelet[666]: E1009 19:44:36.549624     666 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:44:36 ha-898615 kubelet[666]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:44:36 ha-898615 kubelet[666]:  > podSandboxID="6cf5e76a918ecf34d99855ac661a1e6984a2f2e13969711afc706e556815ec7b"
	Oct 09 19:44:36 ha-898615 kubelet[666]: E1009 19:44:36.549752     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:44:36 ha-898615 kubelet[666]:         container kube-apiserver start failed in pod kube-apiserver-ha-898615_kube-system(aad558bc9f2efde8d3c90d373798de18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:44:36 ha-898615 kubelet[666]:  > logger="UnhandledError"
	Oct 09 19:44:36 ha-898615 kubelet[666]: E1009 19:44:36.549787     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-898615" podUID="aad558bc9f2efde8d3c90d373798de18"
	Oct 09 19:44:37 ha-898615 kubelet[666]: E1009 19:44:37.263207     666 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 09 19:44:38 ha-898615 kubelet[666]: E1009 19:44:38.168243     666 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:44:38 ha-898615 kubelet[666]: I1009 19:44:38.339599     666 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:44:38 ha-898615 kubelet[666]: E1009 19:44:38.340003     666 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	Oct 09 19:44:39 ha-898615 kubelet[666]: E1009 19:44:39.614942     666 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-898615&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 09 19:44:40 ha-898615 kubelet[666]: E1009 19:44:40.477952     666 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-898615.186ce9e49e54e3dd  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-898615,UID:ha-898615,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-898615 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-898615,},FirstTimestamp:2025-10-09 19:38:42.513200093 +0000 UTC m=+0.079441148,LastTimestamp:2025-10-09 19:38:42.513200093 +0000 UTC m=+0.079441148,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-898615,}"
	Oct 09 19:44:40 ha-898615 kubelet[666]: E1009 19:44:40.523603     666 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:44:40 ha-898615 kubelet[666]: E1009 19:44:40.550531     666 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:44:40 ha-898615 kubelet[666]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:44:40 ha-898615 kubelet[666]:  > podSandboxID="6b25866fd5abc60bd238bd9a662548c51d322e9ed30360455db0617325fb150e"
	Oct 09 19:44:40 ha-898615 kubelet[666]: E1009 19:44:40.550644     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:44:40 ha-898615 kubelet[666]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-898615_kube-system(606a9e8949f01295d66148e5eac379ce): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:44:40 ha-898615 kubelet[666]:  > logger="UnhandledError"
	Oct 09 19:44:40 ha-898615 kubelet[666]: E1009 19:44:40.550675     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-898615" podUID="606a9e8949f01295d66148e5eac379ce"
	Oct 09 19:44:42 ha-898615 kubelet[666]: E1009 19:44:42.539969     666 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-898615\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615: exit status 2 (300.543009ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-898615" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.15s)
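The CRI-O and kubelet entries above show every control-plane container (etcd, kube-apiserver, kube-controller-manager) failing with "container create failed: cannot open sd-bus: No such file or directory", which is the proximate reason the apiserver never returns and the node never reports Ready. That message typically appears when CRI-O is configured for the systemd cgroup manager but cannot reach the systemd bus inside the node. The commands below are a minimal diagnostic sketch, not part of the test harness; they assume the ha-898615 profile still exists and that CRI-O exposes its active configuration via `crio config`:

	# Check which cgroup manager CRI-O is using on the node.
	out/minikube-linux-amd64 ssh -p ha-898615 -- 'sudo crio config 2>/dev/null | grep -i cgroup_manager'
	# Check whether the systemd bus (sd-bus) is reachable inside the node.
	out/minikube-linux-amd64 ssh -p ha-898615 -- 'sudo busctl --system list >/dev/null 2>&1 && echo sd-bus reachable || echo sd-bus not reachable'
	# Check overall systemd state inside the node.
	out/minikube-linux-amd64 ssh -p ha-898615 -- 'sudo systemctl is-system-running || true'

If sd-bus turns out to be unreachable while cgroup_manager is set to "systemd", the usual follow-ups are to switch CRI-O to the cgroupfs manager or to restore a working systemd inside the node image; which of those applies here is an assumption, not something the log itself confirms.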

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (1.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 node delete m03 --alsologtostderr -v 5: exit status 103 (255.610796ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-898615 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-898615"

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:44:44.861923  211987 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:44:44.862206  211987 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:44:44.862216  211987 out.go:374] Setting ErrFile to fd 2...
	I1009 19:44:44.862234  211987 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:44:44.862422  211987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:44:44.862723  211987 mustload.go:65] Loading cluster: ha-898615
	I1009 19:44:44.863057  211987 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:44.863435  211987 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:44.880519  211987 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:44:44.880797  211987 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:44:44.941587  211987 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:44:44.930924953 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:44:44.941711  211987 api_server.go:166] Checking apiserver status ...
	I1009 19:44:44.941754  211987 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:44:44.941786  211987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:44.959734  211987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	W1009 19:44:45.064850  211987 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:44:45.066748  211987 out.go:179] * The control-plane node ha-898615 apiserver is not running: (state=Stopped)
	I1009 19:44:45.068337  211987 out.go:179]   To start a cluster, run: "minikube start -p ha-898615"

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-linux-amd64 -p ha-898615 node delete m03 --alsologtostderr -v 5": exit status 103
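Note: the exit status 103 above follows directly from the pre-flight probe visible in the stderr log: minikube SSHes into the control-plane node (127.0.0.1:32788) and runs `sudo pgrep -xnf kube-apiserver.*minikube.*`; the probe exited 1, so the apiserver was reported as stopped and the node delete was refused. The snippet below is a minimal, illustrative Go sketch of that probe only, not minikube's own code; it substitutes `docker exec` into the ha-898615 container for the SSH session used in the log.
	// apiserver_probe.go — illustrative sketch, not minikube code.
	// pgrep exits 1 when no process matches, which is treated as "apiserver stopped".
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func apiserverRunning(container string) (bool, error) {
		// Same pgrep expression as in the stderr above.
		cmd := exec.Command("docker", "exec", container,
			"sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		err := cmd.Run()
		if err == nil {
			return true, nil
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
			return false, nil // no matching process: apiserver is stopped
		}
		return false, err // docker-level failure, container missing, etc.
	}

	func main() {
		running, err := apiserverRunning("ha-898615")
		fmt.Printf("apiserver running=%v err=%v\n", running, err)
	}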
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5: exit status 2 (299.660983ms)

                                                
                                                
-- stdout --
	ha-898615
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:44:45.119698  212099 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:44:45.120000  212099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:44:45.120012  212099 out.go:374] Setting ErrFile to fd 2...
	I1009 19:44:45.120017  212099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:44:45.120230  212099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:44:45.120437  212099 out.go:368] Setting JSON to false
	I1009 19:44:45.120469  212099 mustload.go:65] Loading cluster: ha-898615
	I1009 19:44:45.120600  212099 notify.go:221] Checking for updates...
	I1009 19:44:45.120989  212099 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:45.121012  212099 status.go:174] checking status of ha-898615 ...
	I1009 19:44:45.121684  212099 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:45.139516  212099 status.go:371] ha-898615 host status = "Running" (err=<nil>)
	I1009 19:44:45.139550  212099 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:44:45.139826  212099 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:44:45.157245  212099 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:44:45.157543  212099 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:44:45.157590  212099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:45.175851  212099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:45.276728  212099 ssh_runner.go:195] Run: systemctl --version
	I1009 19:44:45.283342  212099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:44:45.296685  212099 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:44:45.355697  212099 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:44:45.346120272 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:44:45.356222  212099 kubeconfig.go:125] found "ha-898615" server: "https://192.168.49.2:8443"
	I1009 19:44:45.356256  212099 api_server.go:166] Checking apiserver status ...
	I1009 19:44:45.356292  212099 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 19:44:45.367402  212099 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:44:45.367423  212099 status.go:463] ha-898615 apiserver status = Running (err=<nil>)
	I1009 19:44:45.367435  212099 status.go:176] ha-898615 status: &{Name:ha-898615 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5" : exit status 2
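The status call above exits with code 2 yet still prints a full per-node breakdown, so the output stays machine-readable even when the test treats the exit code as a failure. The sketch below is a hedged illustration of consuming it from Go: it assumes the status command's documented --output json flag, and the field names are copied from the Status struct dumped in the stderr above (Name, Host, Kubelet, APIServer, Kubeconfig). No meaning is asserted for the specific exit codes; the sketch just surfaces the exit error alongside the parsed fields.
	// status_json.go — illustrative sketch only. minikube may emit a single
	// object or a list depending on node count, so both shapes are handled.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type nodeStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-898615",
			"status", "--output", "json")
		out, runErr := cmd.Output() // stdout is still populated on a non-zero exit
		var nodes []nodeStatus
		if err := json.Unmarshal(out, &nodes); err != nil {
			var single nodeStatus
			if err2 := json.Unmarshal(out, &single); err2 != nil {
				panic(fmt.Sprintf("unexpected status output %q: %v / %v", out, err, err2))
			}
			nodes = []nodeStatus{single}
		}
		for _, n := range nodes {
			fmt.Printf("%s: host=%s kubelet=%s apiserver=%s kubeconfig=%s (exit err: %v)\n",
				n.Name, n.Host, n.Kubelet, n.APIServer, n.Kubeconfig, runErr)
		}
	}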
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-898615
helpers_test.go:243: (dbg) docker inspect ha-898615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	        "Created": "2025-10-09T19:27:46.185451139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 208124,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:38:36.214223554Z",
	            "FinishedAt": "2025-10-09T19:38:35.052463553Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hostname",
	        "HostsPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hosts",
	        "LogPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e-json.log",
	        "Name": "/ha-898615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-898615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-898615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	                "LowerDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-898615",
	                "Source": "/var/lib/docker/volumes/ha-898615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-898615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-898615",
	                "name.minikube.sigs.k8s.io": "ha-898615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ed39874bd8a00ff4fec1cf869ad7e0f72bd903d36f0543c07f3bdadae1a02c8a",
	            "SandboxKey": "/var/run/docker/netns/ed39874bd8a0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-898615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:c8:1b:ff:df:39",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a28b71fbb3464ef90fae817eea2f214ee0a3d1fe379af7ad6c6cc41b0261919e",
	                    "EndpointID": "f44155a867b0645d8a2be6662daaecc287c0af49551a1cb0ce8c095eaa3c9fd2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-898615",
	                        "24eb0e5b6c52"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
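The inspect output above is where the SSH endpoint used throughout these logs comes from: 22/tcp inside the ha-898615 container is published on 127.0.0.1:32788. The sketch below reads that mapping back with the same Go template that appears in the cli_runner lines earlier in this log; it is an illustration, not minikube code.
	// ssh_port.go — illustrative sketch only.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func sshHostPort(container string) (string, error) {
		// Template taken verbatim from the cli_runner lines above.
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("ha-898615")
		fmt.Println("22/tcp is published on 127.0.0.1:"+port, "err:", err)
	}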
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615: exit status 2 (299.378988ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
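The `--format={{.Host}}` call above reduces the node status to the bare string "Running". As an illustration only (an assumption about how such a Go template is applied, not a copy of minikube's implementation), the snippet below renders the same template over a struct whose fields mirror the Status value dumped in the earlier stderr, producing the same one-word output.
	// host_format.go — illustrative sketch only; field names copied from the
	// stderr dump above, values hard-coded to match this run.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		st := Status{Name: "ha-898615", Host: "Running", Kubelet: "Running",
			APIServer: "Stopped", Kubeconfig: "Configured"}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		if err := tmpl.Execute(os.Stdout, st); err != nil { // prints "Running"
			panic(err)
		}
	}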
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-898615 kubectl -- rollout status deployment/busybox                      │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node add --alsologtostderr -v 5                                   │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node stop m02 --alsologtostderr -v 5                              │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node start m02 --alsologtostderr -v 5                             │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node list --alsologtostderr -v 5                                  │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ stop    │ ha-898615 stop --alsologtostderr -v 5                                       │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ ha-898615 start --wait true --alsologtostderr -v 5                          │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ node    │ ha-898615 node list --alsologtostderr -v 5                                  │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:44 UTC │                     │
	│ node    │ ha-898615 node delete m03 --alsologtostderr -v 5                            │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:38:35
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:38:35.975523  207930 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:38:35.975809  207930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:35.975820  207930 out.go:374] Setting ErrFile to fd 2...
	I1009 19:38:35.975824  207930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:35.976017  207930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:38:35.976529  207930 out.go:368] Setting JSON to false
	I1009 19:38:35.977520  207930 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4865,"bootTime":1760033851,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:38:35.977618  207930 start.go:143] virtualization: kvm guest
	I1009 19:38:35.979911  207930 out.go:179] * [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:38:35.981312  207930 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:38:35.981323  207930 notify.go:221] Checking for updates...
	I1009 19:38:35.983929  207930 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:38:35.985330  207930 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:38:35.986909  207930 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:38:35.988196  207930 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:38:35.989553  207930 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:38:35.991338  207930 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:35.991495  207930 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:38:36.015602  207930 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:38:36.015757  207930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:38:36.075307  207930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:38:36.06526946 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:38:36.075425  207930 docker.go:319] overlay module found
	I1009 19:38:36.077367  207930 out.go:179] * Using the docker driver based on existing profile
	I1009 19:38:36.078862  207930 start.go:309] selected driver: docker
	I1009 19:38:36.078876  207930 start.go:930] validating driver "docker" against &{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:38:36.078976  207930 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:38:36.079059  207930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:38:36.140960  207930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:38:36.131484248 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:38:36.141642  207930 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:38:36.141674  207930 cni.go:84] Creating CNI manager for ""
	I1009 19:38:36.141735  207930 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:38:36.141786  207930 start.go:353] cluster config:
	{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1009 19:38:36.143505  207930 out.go:179] * Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	I1009 19:38:36.144834  207930 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:38:36.146099  207930 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:38:36.147345  207930 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:38:36.147407  207930 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:38:36.147422  207930 cache.go:58] Caching tarball of preloaded images
	I1009 19:38:36.147438  207930 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:38:36.147532  207930 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:38:36.147545  207930 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:38:36.147660  207930 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:38:36.167793  207930 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:38:36.167815  207930 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:38:36.167836  207930 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:38:36.167869  207930 start.go:361] acquireMachinesLock for ha-898615: {Name:mk23a3ebf19c307a491c00f9452d757837bd240e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:36.167944  207930 start.go:365] duration metric: took 50.923µs to acquireMachinesLock for "ha-898615"
	I1009 19:38:36.167966  207930 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:38:36.167995  207930 fix.go:55] fixHost starting: 
	I1009 19:38:36.168216  207930 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:36.186209  207930 fix.go:113] recreateIfNeeded on ha-898615: state=Stopped err=<nil>
	W1009 19:38:36.186255  207930 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:38:36.188183  207930 out.go:252] * Restarting existing docker container for "ha-898615" ...
	I1009 19:38:36.188284  207930 cli_runner.go:164] Run: docker start ha-898615
	I1009 19:38:36.429165  207930 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:36.449048  207930 kic.go:430] container "ha-898615" state is running.
	I1009 19:38:36.449470  207930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:38:36.468830  207930 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:38:36.469116  207930 machine.go:93] provisionDockerMachine start ...
	I1009 19:38:36.469193  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:36.488569  207930 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:36.488848  207930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 19:38:36.488870  207930 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:38:36.489575  207930 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41826->127.0.0.1:32788: read: connection reset by peer
	I1009 19:38:39.637605  207930 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:38:39.637634  207930 ubuntu.go:182] provisioning hostname "ha-898615"
	I1009 19:38:39.637693  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:39.655862  207930 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:39.656140  207930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 19:38:39.656156  207930 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-898615 && echo "ha-898615" | sudo tee /etc/hostname
	I1009 19:38:39.812565  207930 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:38:39.812645  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:39.831046  207930 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:39.831304  207930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 19:38:39.831326  207930 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-898615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-898615/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-898615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:38:39.979591  207930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:38:39.979628  207930 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:38:39.979662  207930 ubuntu.go:190] setting up certificates
	I1009 19:38:39.979675  207930 provision.go:84] configureAuth start
	I1009 19:38:39.979738  207930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:38:39.997703  207930 provision.go:143] copyHostCerts
	I1009 19:38:39.997746  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:38:39.997777  207930 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:38:39.997806  207930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:38:39.997879  207930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:38:39.997970  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:38:39.997989  207930 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:38:39.997996  207930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:38:39.998029  207930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:38:39.998077  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:38:39.998096  207930 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:38:39.998102  207930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:38:39.998125  207930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:38:39.998178  207930 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.ha-898615 san=[127.0.0.1 192.168.49.2 ha-898615 localhost minikube]
	I1009 19:38:40.329941  207930 provision.go:177] copyRemoteCerts
	I1009 19:38:40.330005  207930 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:38:40.330048  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:40.348609  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:40.453024  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:38:40.453090  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:38:40.471037  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:38:40.471100  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:38:40.488791  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:38:40.488882  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:38:40.506540  207930 provision.go:87] duration metric: took 526.848912ms to configureAuth
	I1009 19:38:40.506573  207930 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:38:40.506763  207930 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:40.506890  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:40.524930  207930 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:40.525160  207930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 19:38:40.525178  207930 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:38:40.786346  207930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:38:40.786372  207930 machine.go:96] duration metric: took 4.31723847s to provisionDockerMachine
	I1009 19:38:40.786407  207930 start.go:294] postStartSetup for "ha-898615" (driver="docker")
	I1009 19:38:40.786419  207930 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:38:40.786479  207930 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:38:40.786518  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:40.804162  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:40.908341  207930 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:38:40.911873  207930 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:38:40.911904  207930 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:38:40.911923  207930 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:38:40.911983  207930 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:38:40.912072  207930 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:38:40.912085  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:38:40.912183  207930 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:38:40.919808  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:38:40.937277  207930 start.go:297] duration metric: took 150.853989ms for postStartSetup
	I1009 19:38:40.937349  207930 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:38:40.937424  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:40.955872  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:41.055635  207930 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:38:41.060222  207930 fix.go:57] duration metric: took 4.892219254s for fixHost
	I1009 19:38:41.060252  207930 start.go:84] releasing machines lock for "ha-898615", held for 4.892295934s
	I1009 19:38:41.060315  207930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:38:41.078135  207930 ssh_runner.go:195] Run: cat /version.json
	I1009 19:38:41.078202  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:41.078238  207930 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:38:41.078301  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:41.096227  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:41.096500  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:41.197266  207930 ssh_runner.go:195] Run: systemctl --version
	I1009 19:38:41.254878  207930 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:38:41.291881  207930 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:38:41.296935  207930 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:38:41.297063  207930 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:38:41.305687  207930 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:38:41.305714  207930 start.go:496] detecting cgroup driver to use...
	I1009 19:38:41.305778  207930 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:38:41.305833  207930 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:38:41.320848  207930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:38:41.334341  207930 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:38:41.334430  207930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:38:41.350433  207930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:38:41.364693  207930 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:38:41.444310  207930 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:38:41.525534  207930 docker.go:234] disabling docker service ...
	I1009 19:38:41.525603  207930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:38:41.540323  207930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:38:41.553168  207930 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:38:41.632212  207930 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:38:41.711096  207930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:38:41.724923  207930 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:38:41.740807  207930 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:38:41.740860  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.750143  207930 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:38:41.750201  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.759647  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.768954  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.778411  207930 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:38:41.786985  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.796139  207930 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.804565  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.813340  207930 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:38:41.821627  207930 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:38:41.829434  207930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:41.907787  207930 ssh_runner.go:195] Run: sudo systemctl restart crio
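Condensed, the CRI-O preparation above amounts to the following shell sequence (a minimal sketch assembled from the commands in this log; the drop-in path and keys are exactly those shown, and the restart is the final step):

    # point CRI-O at the minikube pause image and the systemd cgroup manager
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
    # allow unprivileged low ports inside pods and enable IP forwarding on the node
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart crio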
	I1009 19:38:42.015071  207930 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:38:42.015128  207930 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:38:42.019190  207930 start.go:564] Will wait 60s for crictl version
	I1009 19:38:42.019246  207930 ssh_runner.go:195] Run: which crictl
	I1009 19:38:42.022757  207930 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:38:42.047602  207930 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:38:42.047669  207930 ssh_runner.go:195] Run: crio --version
	I1009 19:38:42.076709  207930 ssh_runner.go:195] Run: crio --version
	I1009 19:38:42.108280  207930 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:38:42.109626  207930 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:38:42.127160  207930 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:38:42.131748  207930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:38:42.142508  207930 kubeadm.go:883] updating cluster {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClien
tPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:38:42.142654  207930 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:38:42.142740  207930 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:38:42.176610  207930 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:38:42.176633  207930 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:38:42.176682  207930 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:38:42.202986  207930 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:38:42.203009  207930 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:38:42.203021  207930 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:38:42.203143  207930 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-898615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:38:42.203236  207930 ssh_runner.go:195] Run: crio config
	I1009 19:38:42.252192  207930 cni.go:84] Creating CNI manager for ""
	I1009 19:38:42.252222  207930 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:38:42.252244  207930 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:38:42.252274  207930 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-898615 NodeName:ha-898615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:38:42.252455  207930 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-898615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:38:42.252533  207930 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:38:42.261297  207930 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:38:42.261376  207930 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:38:42.269740  207930 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:38:42.283297  207930 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:38:42.296911  207930 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
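With the rendered config staged at /var/tmp/minikube/kubeadm.yaml.new, it could be sanity-checked by hand before kubeadm consumes it; a sketch only, assuming kubeadm v1.34.1's "config validate" subcommand (not something this test flow runs):

    # validate the staged kubeadm config against the v1beta4 schema (sketch)
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new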
	I1009 19:38:42.310550  207930 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:38:42.314737  207930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:38:42.325511  207930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:42.406041  207930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:38:42.431169  207930 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615 for IP: 192.168.49.2
	I1009 19:38:42.431199  207930 certs.go:195] generating shared ca certs ...
	I1009 19:38:42.431223  207930 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:42.431407  207930 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:38:42.431466  207930 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:38:42.431481  207930 certs.go:257] generating profile certs ...
	I1009 19:38:42.431609  207930 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key
	I1009 19:38:42.431640  207930 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.ff60cacd
	I1009 19:38:42.431668  207930 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.ff60cacd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1009 19:38:42.592908  207930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.ff60cacd ...
	I1009 19:38:42.592943  207930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.ff60cacd: {Name:mkbeae8ef9cb7280e84a8eafb5e4ed5a9f929f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:42.593120  207930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.ff60cacd ...
	I1009 19:38:42.593133  207930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.ff60cacd: {Name:mk2a0878011f7339a4c02515e180398732017ed0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:42.593209  207930 certs.go:382] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.ff60cacd -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt
	I1009 19:38:42.593374  207930 certs.go:386] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.ff60cacd -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key
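The SANs baked into the freshly signed apiserver cert (10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2, per the crypto.go line above) can be confirmed with openssl; a sketch against the profile path from this log:

    # list the Subject Alternative Names in the generated apiserver certificate (sketch)
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt \
      | grep -A1 'Subject Alternative Name'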
	I1009 19:38:42.593552  207930 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key
	I1009 19:38:42.593571  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:38:42.593584  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:38:42.593597  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:38:42.593608  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:38:42.593621  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:38:42.593631  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:38:42.593644  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:38:42.593653  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:38:42.593711  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:38:42.593739  207930 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:38:42.593749  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:38:42.593769  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:38:42.593790  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:38:42.593810  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:38:42.593855  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:38:42.593880  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:38:42.593893  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:38:42.593905  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:42.594443  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:38:42.612666  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:38:42.632698  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:38:42.651700  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:38:42.670307  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1009 19:38:42.688542  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:38:42.707425  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:38:42.725187  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:38:42.743982  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:38:42.762241  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:38:42.780038  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:38:42.798203  207930 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:38:42.811431  207930 ssh_runner.go:195] Run: openssl version
	I1009 19:38:42.818209  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:38:42.827928  207930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:42.831986  207930 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:42.832055  207930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:42.866756  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:38:42.875613  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:38:42.884840  207930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:38:42.888808  207930 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:38:42.888867  207930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:38:42.923046  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:38:42.932828  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:38:42.943293  207930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:38:42.948839  207930 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:38:42.948923  207930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:38:42.995613  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
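The /etc/ssl/certs/<hash>.0 symlinks created above are named after each certificate's subject hash; per certificate, the step is roughly (a sketch using the same openssl invocation as the log):

    # derive the subject hash and create the OpenSSL-style trust symlink (sketch)
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"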
	I1009 19:38:43.005104  207930 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:38:43.009159  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:38:43.044274  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:38:43.079173  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:38:43.114440  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:38:43.149740  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:38:43.185030  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
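Each of the six -checkend 86400 probes above verifies that a control-plane certificate remains valid for at least 24 hours; the same check can be expressed as one loop (a sketch over the cert paths from this log):

    # fail loudly for any control-plane cert expiring within 24h (sketch)
    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
        || echo "${c}.crt expires within 24h"
    done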
	I1009 19:38:43.220218  207930 kubeadm.go:400] StartCluster: {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPa
th: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:38:43.220324  207930 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:38:43.220402  207930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:38:43.248594  207930 cri.go:89] found id: ""
	I1009 19:38:43.248669  207930 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:38:43.257055  207930 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:38:43.257079  207930 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:38:43.257130  207930 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:38:43.264639  207930 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:38:43.265056  207930 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:38:43.265186  207930 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-137890/kubeconfig needs updating (will repair): [kubeconfig missing "ha-898615" cluster setting kubeconfig missing "ha-898615" context setting]
	I1009 19:38:43.265530  207930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:43.266090  207930 kapi.go:59] client config for ha-898615: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:38:43.266647  207930 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:38:43.266666  207930 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:38:43.266673  207930 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:38:43.266678  207930 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:38:43.266683  207930 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:38:43.266709  207930 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
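The rest.Config dump above is minikube's client-go configuration for the repaired kubeconfig entry; the same credentials can be exercised from the command line (a sketch, not something the test itself runs):

    # talk to the apiserver with the profile's client cert/key and the cluster CA (sketch)
    kubectl --server=https://192.168.49.2:8443 \
      --certificate-authority=/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt \
      --client-certificate=/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt \
      --client-key=/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key \
      get nodes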
	I1009 19:38:43.267074  207930 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:38:43.275909  207930 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:38:43.275950  207930 kubeadm.go:601] duration metric: took 18.863916ms to restartPrimaryControlPlane
	I1009 19:38:43.275961  207930 kubeadm.go:402] duration metric: took 55.75684ms to StartCluster
	I1009 19:38:43.275983  207930 settings.go:142] acquiring lock: {Name:mk34466033fb866bc8e167d2d953624ad0802283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:43.276054  207930 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:38:43.276601  207930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:43.276813  207930 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:38:43.276876  207930 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:38:43.277003  207930 addons.go:69] Setting storage-provisioner=true in profile "ha-898615"
	I1009 19:38:43.277022  207930 addons.go:238] Setting addon storage-provisioner=true in "ha-898615"
	I1009 19:38:43.277040  207930 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:43.277064  207930 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:38:43.277014  207930 addons.go:69] Setting default-storageclass=true in profile "ha-898615"
	I1009 19:38:43.277117  207930 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-898615"
	I1009 19:38:43.277359  207930 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:43.277562  207930 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:43.280740  207930 out.go:179] * Verifying Kubernetes components...
	I1009 19:38:43.282030  207930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:43.297922  207930 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:38:43.297964  207930 kapi.go:59] client config for ha-898615: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:38:43.298366  207930 addons.go:238] Setting addon default-storageclass=true in "ha-898615"
	I1009 19:38:43.298425  207930 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:38:43.298892  207930 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:43.299418  207930 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:38:43.299443  207930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:38:43.299502  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:43.322859  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:43.328834  207930 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:38:43.328858  207930 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:38:43.328930  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:43.352463  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:43.401305  207930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:38:43.415541  207930 node_ready.go:35] waiting up to 6m0s for node "ha-898615" to be "Ready" ...
	I1009 19:38:43.436410  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:38:43.466671  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:43.496356  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.496416  207930 retry.go:31] will retry after 342.264655ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
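The apply/retry cycle that follows (retry.go with growing delays while the apiserver refuses connections on localhost:8443) is equivalent to a plain backoff loop; a sketch with illustrative delays, the kubectl command itself taken verbatim from the log:

    # retry the addon apply until the apiserver answers (sketch; delays are illustrative,
    # not retry.go's exact jittered backoff)
    for delay in 0.3 0.5 1 2 4 8; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml && break
      sleep "$delay"
    done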
	W1009 19:38:43.525246  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.525289  207930 retry.go:31] will retry after 174.41945ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.701041  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:43.758027  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.758057  207930 retry.go:31] will retry after 209.535579ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.839271  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:43.896734  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.896774  207930 retry.go:31] will retry after 538.756932ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.968448  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:44.023415  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:44.023454  207930 retry.go:31] will retry after 556.953167ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:44.436515  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:44.490407  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:44.490440  207930 retry.go:31] will retry after 711.386877ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:44.580632  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:44.634616  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:44.634648  207930 retry.go:31] will retry after 1.063862903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:45.202765  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:45.257625  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:45.257701  207930 retry.go:31] will retry after 1.231190246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:45.416376  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
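While the addon applies are retried, node_ready.go keeps polling the node's Ready condition against the still-unreachable apiserver at 192.168.49.2:8443; the equivalent manual probe (a sketch using the kubeconfig path from this log):

    # read the Ready condition of the control-plane node (sketch)
    kubectl --kubeconfig /home/jenkins/minikube-integration/21683-137890/kubeconfig \
      get node ha-898615 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'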
	I1009 19:38:45.698732  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:45.755705  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:45.755743  207930 retry.go:31] will retry after 975.429295ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:46.489752  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:46.545290  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:46.545326  207930 retry.go:31] will retry after 1.502139969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:46.731733  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:46.787009  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:46.787042  207930 retry.go:31] will retry after 2.693302994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:47.416975  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:48.048320  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:48.103285  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:48.103320  207930 retry.go:31] will retry after 2.181453682s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:49.480527  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:49.538700  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:49.538734  207930 retry.go:31] will retry after 4.218840209s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:49.916480  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:50.284976  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:50.341540  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:50.341577  207930 retry.go:31] will retry after 1.691103888s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:52.033656  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:52.088485  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:52.088524  207930 retry.go:31] will retry after 2.514845713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:52.416328  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:53.758082  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:53.814997  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:53.815038  207930 retry.go:31] will retry after 5.532299656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:54.416935  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:54.604251  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:54.659736  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:54.659775  207930 retry.go:31] will retry after 3.993767117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:56.916955  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:58.654616  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:58.713410  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:58.713444  207930 retry.go:31] will retry after 9.568142224s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:59.347766  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:59.404337  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:59.404389  207930 retry.go:31] will retry after 6.225933732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:59.417079  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:01.916592  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:03.916927  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:05.630497  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:39:05.685477  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:05.685519  207930 retry.go:31] will retry after 12.822953608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:06.417252  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:08.282818  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:39:08.337692  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:08.337725  207930 retry.go:31] will retry after 7.236832581s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:08.916334  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:10.916501  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:12.917166  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:15.416556  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:15.574832  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:39:15.630769  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:15.630807  207930 retry.go:31] will retry after 32.093842325s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:17.417148  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:18.509437  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:39:18.569821  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:18.569859  207930 retry.go:31] will retry after 8.204907126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:19.917021  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:22.416351  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:24.416692  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:26.775723  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:39:26.830536  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:26.830579  207930 retry.go:31] will retry after 15.287470649s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:26.916248  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:28.916363  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:30.916660  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:33.416644  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:35.916574  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:37.917054  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:40.416285  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:42.118997  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:39:42.177198  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:42.177233  207930 retry.go:31] will retry after 19.60601903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:42.417176  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:44.916569  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:46.916954  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:47.725338  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:39:47.781475  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:47.781505  207930 retry.go:31] will retry after 23.586099799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:49.416272  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:51.416491  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:53.416753  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:55.417076  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:57.916443  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:00.416299  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:40:01.784079  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:40:01.841854  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:40:01.842020  207930 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1009 19:40:02.416680  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:04.417047  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:06.917026  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:09.417084  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:40:11.367994  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:40:11.424607  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:40:11.424769  207930 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:40:11.426781  207930 out.go:179] * Enabled addons: 
	I1009 19:40:11.428422  207930 addons.go:514] duration metric: took 1m28.151542071s for enable addons: enabled=[]
	W1009 19:40:11.916694  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:14.416598  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:16.416881  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:18.916218  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:20.916269  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:22.916576  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:24.917209  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:27.416502  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:29.916343  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:31.916934  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:34.416282  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:36.416512  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:38.416629  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:40.916603  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:42.917115  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:44.917161  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:47.416324  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:49.416455  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:51.416819  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:53.916417  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:55.916617  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:57.917078  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:00.417118  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:02.916234  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:04.916660  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:07.417218  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:09.916259  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:11.916515  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:13.917072  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:16.416622  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:18.916522  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:21.416789  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:23.916210  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:26.416522  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:28.916461  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:31.416717  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:33.916099  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:36.416332  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:38.916209  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:40.917157  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:43.416729  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:45.916510  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:47.916984  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:50.416131  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:52.416502  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:54.416618  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:56.416678  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:58.916609  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:01.416745  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:03.416958  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:05.916865  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:07.917104  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:10.416213  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:12.416339  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:14.416628  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:16.416997  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:18.916238  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:20.917143  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:23.416620  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:25.916367  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:27.916596  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:30.416273  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:32.916182  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:34.916794  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:37.416177  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:39.916144  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:41.916475  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:44.416966  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:46.916415  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:49.416202  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:51.416497  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:53.416539  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:55.916219  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:58.416153  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:00.417148  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:02.916226  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:04.916621  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:07.416226  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:09.916189  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:11.916327  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:13.916488  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:15.917040  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:18.416211  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:20.916112  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:22.916253  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:24.916887  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:26.917148  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:29.417134  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:31.916366  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:33.916553  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:36.416538  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:38.416719  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:40.916669  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:42.916983  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:45.416239  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:47.916206  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:50.417144  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:52.917059  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:55.416176  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:57.916181  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:59.916956  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:02.416315  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:04.416660  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:06.416831  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:08.417094  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:10.916165  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:12.916294  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:14.916520  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:16.916888  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:19.416178  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:21.416435  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:23.416520  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:25.417135  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:27.916344  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:30.416306  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:32.416429  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:34.416783  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:36.916172  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:39.416191  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:41.416480  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:44:43.416121  207930 node_ready.go:38] duration metric: took 6m0.000528946s for node "ha-898615" to be "Ready" ...
	I1009 19:44:43.418255  207930 out.go:203] 
	W1009 19:44:43.419680  207930 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:44:43.419700  207930 out.go:285] * 
	W1009 19:44:43.421462  207930 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:44:43.422822  207930 out.go:203] 
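
Every failure in the log above appears to trace back to a single condition: the kube-apiserver on "ha-898615" never became reachable. The addon applies fail because kubectl's client-side validation has to download the OpenAPI schema from https://localhost:8443, and the node-ready loop fails because https://192.168.49.2:8443 refuses connections; the --validate=false hint printed by kubectl would only mask that root cause. Below is a minimal sketch (not part of the test suite; the address and endpoint are taken from the log, and the probe is a simplification of what node_ready.go does) of checking apiserver reachability with a short retry loop:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Skip certificate verification: we only care about TCP/HTTP reachability here.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for attempt := 1; attempt <= 10; attempt++ {
			resp, err := client.Get("https://192.168.49.2:8443/readyz")
			if err != nil {
				// A "connection refused" here matches the dial tcp errors in the log above.
				fmt.Printf("attempt %d: %v\n", attempt, err)
				time.Sleep(2 * time.Second)
				continue
			}
			fmt.Printf("attempt %d: /readyz returned %s\n", attempt, resp.Status)
			resp.Body.Close()
			return
		}
	}
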
	
	
	==> CRI-O <==
	Oct 09 19:44:34 ha-898615 crio[519]: time="2025-10-09T19:44:34.549971848Z" level=info msg="createCtr: removing container fc426c179253de3c5286d6424aa571281a653237ad6481a740cca377e8b5a7a0" id=9b651ebc-2874-41fc-9b26-759216cb0f18 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:34 ha-898615 crio[519]: time="2025-10-09T19:44:34.550007454Z" level=info msg="createCtr: deleting container fc426c179253de3c5286d6424aa571281a653237ad6481a740cca377e8b5a7a0 from storage" id=9b651ebc-2874-41fc-9b26-759216cb0f18 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:34 ha-898615 crio[519]: time="2025-10-09T19:44:34.552201133Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-898615_kube-system_2d43b86ac03e93e11f35c923e2560103_0" id=9b651ebc-2874-41fc-9b26-759216cb0f18 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.52508731Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b5661c60-1300-4632-aaf7-bdeeac864bc9 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.526055709Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=1725e831-dbda-472e-986a-63cc1de3e757 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.527117412Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-898615/kube-apiserver" id=899ed538-a4e2-4701-8914-e39461a177f4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.5273638Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.530915074Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.531527799Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.545578816Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=899ed538-a4e2-4701-8914-e39461a177f4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.54700936Z" level=info msg="createCtr: deleting container ID 9bc0044ce65cd09f8ebcf2e321d035b54d500f0a1a35b9b784ebc22754281edd from idIndex" id=899ed538-a4e2-4701-8914-e39461a177f4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.547050058Z" level=info msg="createCtr: removing container 9bc0044ce65cd09f8ebcf2e321d035b54d500f0a1a35b9b784ebc22754281edd" id=899ed538-a4e2-4701-8914-e39461a177f4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.54708322Z" level=info msg="createCtr: deleting container 9bc0044ce65cd09f8ebcf2e321d035b54d500f0a1a35b9b784ebc22754281edd from storage" id=899ed538-a4e2-4701-8914-e39461a177f4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:36 ha-898615 crio[519]: time="2025-10-09T19:44:36.549281769Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-898615_kube-system_aad558bc9f2efde8d3c90d373798de18_0" id=899ed538-a4e2-4701-8914-e39461a177f4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.52406925Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=e3528edc-b46b-4c43-b502-1584b9f192b2 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.525117742Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=87b20685-79e2-458d-af66-e89814136ec1 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.526139813Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-898615/kube-controller-manager" id=35169ac7-542c-4f30-a813-a1ee3b7a8393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.526359081Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.529823317Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.530449796Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.54648838Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=35169ac7-542c-4f30-a813-a1ee3b7a8393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.547848534Z" level=info msg="createCtr: deleting container ID 0d518b35fc4962587c6c0c28fd540b49da12c05e477b6cfa87ac0ac0ee9fcc9b from idIndex" id=35169ac7-542c-4f30-a813-a1ee3b7a8393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.547885787Z" level=info msg="createCtr: removing container 0d518b35fc4962587c6c0c28fd540b49da12c05e477b6cfa87ac0ac0ee9fcc9b" id=35169ac7-542c-4f30-a813-a1ee3b7a8393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.547920392Z" level=info msg="createCtr: deleting container 0d518b35fc4962587c6c0c28fd540b49da12c05e477b6cfa87ac0ac0ee9fcc9b from storage" id=35169ac7-542c-4f30-a813-a1ee3b7a8393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.550180502Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-898615_kube-system_606a9e8949f01295d66148e5eac379ce_0" id=35169ac7-542c-4f30-a813-a1ee3b7a8393 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:44:46.270509    2180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:44:46.271208    2180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:44:46.272929    2180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:44:46.273434    2180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:44:46.274735    2180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:44:46 up  1:27,  0 user,  load average: 0.03, 0.06, 1.10
	Linux ha-898615 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:44:36 ha-898615 kubelet[666]: E1009 19:44:36.549624     666 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:44:36 ha-898615 kubelet[666]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:44:36 ha-898615 kubelet[666]:  > podSandboxID="6cf5e76a918ecf34d99855ac661a1e6984a2f2e13969711afc706e556815ec7b"
	Oct 09 19:44:36 ha-898615 kubelet[666]: E1009 19:44:36.549752     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:44:36 ha-898615 kubelet[666]:         container kube-apiserver start failed in pod kube-apiserver-ha-898615_kube-system(aad558bc9f2efde8d3c90d373798de18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:44:36 ha-898615 kubelet[666]:  > logger="UnhandledError"
	Oct 09 19:44:36 ha-898615 kubelet[666]: E1009 19:44:36.549787     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-898615" podUID="aad558bc9f2efde8d3c90d373798de18"
	Oct 09 19:44:37 ha-898615 kubelet[666]: E1009 19:44:37.263207     666 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 09 19:44:38 ha-898615 kubelet[666]: E1009 19:44:38.168243     666 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:44:38 ha-898615 kubelet[666]: I1009 19:44:38.339599     666 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:44:38 ha-898615 kubelet[666]: E1009 19:44:38.340003     666 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	Oct 09 19:44:39 ha-898615 kubelet[666]: E1009 19:44:39.614942     666 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-898615&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 09 19:44:40 ha-898615 kubelet[666]: E1009 19:44:40.477952     666 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-898615.186ce9e49e54e3dd  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-898615,UID:ha-898615,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-898615 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-898615,},FirstTimestamp:2025-10-09 19:38:42.513200093 +0000 UTC m=+0.079441148,LastTimestamp:2025-10-09 19:38:42.513200093 +0000 UTC m=+0.079441148,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-898615,}"
	Oct 09 19:44:40 ha-898615 kubelet[666]: E1009 19:44:40.523603     666 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:44:40 ha-898615 kubelet[666]: E1009 19:44:40.550531     666 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:44:40 ha-898615 kubelet[666]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:44:40 ha-898615 kubelet[666]:  > podSandboxID="6b25866fd5abc60bd238bd9a662548c51d322e9ed30360455db0617325fb150e"
	Oct 09 19:44:40 ha-898615 kubelet[666]: E1009 19:44:40.550644     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:44:40 ha-898615 kubelet[666]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-898615_kube-system(606a9e8949f01295d66148e5eac379ce): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:44:40 ha-898615 kubelet[666]:  > logger="UnhandledError"
	Oct 09 19:44:40 ha-898615 kubelet[666]: E1009 19:44:40.550675     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-898615" podUID="606a9e8949f01295d66148e5eac379ce"
	Oct 09 19:44:42 ha-898615 kubelet[666]: E1009 19:44:42.539969     666 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-898615\" not found"
	Oct 09 19:44:45 ha-898615 kubelet[666]: E1009 19:44:45.169650     666 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:44:45 ha-898615 kubelet[666]: I1009 19:44:45.342395     666 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:44:45 ha-898615 kubelet[666]: E1009 19:44:45.342851     666 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615: exit status 2 (300.692876ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-898615" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (1.86s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-898615" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-898615\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-898615\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-898615\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":nul
l,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list
--output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-898615
helpers_test.go:243: (dbg) docker inspect ha-898615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	        "Created": "2025-10-09T19:27:46.185451139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 208124,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:38:36.214223554Z",
	            "FinishedAt": "2025-10-09T19:38:35.052463553Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hostname",
	        "HostsPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hosts",
	        "LogPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e-json.log",
	        "Name": "/ha-898615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-898615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-898615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	                "LowerDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-898615",
	                "Source": "/var/lib/docker/volumes/ha-898615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-898615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-898615",
	                "name.minikube.sigs.k8s.io": "ha-898615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ed39874bd8a00ff4fec1cf869ad7e0f72bd903d36f0543c07f3bdadae1a02c8a",
	            "SandboxKey": "/var/run/docker/netns/ed39874bd8a0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-898615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:c8:1b:ff:df:39",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a28b71fbb3464ef90fae817eea2f214ee0a3d1fe379af7ad6c6cc41b0261919e",
	                    "EndpointID": "f44155a867b0645d8a2be6662daaecc287c0af49551a1cb0ce8c095eaa3c9fd2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-898615",
	                        "24eb0e5b6c52"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615: exit status 2 (302.126084ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-898615 kubectl -- rollout status deployment/busybox                      │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node add --alsologtostderr -v 5                                   │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node stop m02 --alsologtostderr -v 5                              │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node start m02 --alsologtostderr -v 5                             │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node list --alsologtostderr -v 5                                  │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ stop    │ ha-898615 stop --alsologtostderr -v 5                                       │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ ha-898615 start --wait true --alsologtostderr -v 5                          │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ node    │ ha-898615 node list --alsologtostderr -v 5                                  │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:44 UTC │                     │
	│ node    │ ha-898615 node delete m03 --alsologtostderr -v 5                            │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:38:35
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:38:35.975523  207930 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:38:35.975809  207930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:35.975820  207930 out.go:374] Setting ErrFile to fd 2...
	I1009 19:38:35.975824  207930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:35.976017  207930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:38:35.976529  207930 out.go:368] Setting JSON to false
	I1009 19:38:35.977520  207930 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4865,"bootTime":1760033851,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:38:35.977618  207930 start.go:143] virtualization: kvm guest
	I1009 19:38:35.979911  207930 out.go:179] * [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:38:35.981312  207930 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:38:35.981323  207930 notify.go:221] Checking for updates...
	I1009 19:38:35.983929  207930 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:38:35.985330  207930 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:38:35.986909  207930 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:38:35.988196  207930 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:38:35.989553  207930 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:38:35.991338  207930 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:35.991495  207930 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:38:36.015602  207930 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:38:36.015757  207930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:38:36.075307  207930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:38:36.06526946 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:38:36.075425  207930 docker.go:319] overlay module found
	I1009 19:38:36.077367  207930 out.go:179] * Using the docker driver based on existing profile
	I1009 19:38:36.078862  207930 start.go:309] selected driver: docker
	I1009 19:38:36.078876  207930 start.go:930] validating driver "docker" against &{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:38:36.078976  207930 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:38:36.079059  207930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:38:36.140960  207930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:38:36.131484248 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:38:36.141642  207930 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:38:36.141674  207930 cni.go:84] Creating CNI manager for ""
	I1009 19:38:36.141735  207930 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:38:36.141786  207930 start.go:353] cluster config:
	{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1009 19:38:36.143505  207930 out.go:179] * Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	I1009 19:38:36.144834  207930 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:38:36.146099  207930 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:38:36.147345  207930 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:38:36.147407  207930 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:38:36.147422  207930 cache.go:58] Caching tarball of preloaded images
	I1009 19:38:36.147438  207930 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:38:36.147532  207930 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:38:36.147545  207930 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:38:36.147660  207930 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:38:36.167793  207930 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:38:36.167815  207930 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:38:36.167836  207930 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:38:36.167869  207930 start.go:361] acquireMachinesLock for ha-898615: {Name:mk23a3ebf19c307a491c00f9452d757837bd240e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:36.167944  207930 start.go:365] duration metric: took 50.923µs to acquireMachinesLock for "ha-898615"
	I1009 19:38:36.167966  207930 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:38:36.167995  207930 fix.go:55] fixHost starting: 
	I1009 19:38:36.168216  207930 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:36.186209  207930 fix.go:113] recreateIfNeeded on ha-898615: state=Stopped err=<nil>
	W1009 19:38:36.186255  207930 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:38:36.188183  207930 out.go:252] * Restarting existing docker container for "ha-898615" ...
	I1009 19:38:36.188284  207930 cli_runner.go:164] Run: docker start ha-898615
	I1009 19:38:36.429165  207930 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:36.449048  207930 kic.go:430] container "ha-898615" state is running.
	I1009 19:38:36.449470  207930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:38:36.468830  207930 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:38:36.469116  207930 machine.go:93] provisionDockerMachine start ...
	I1009 19:38:36.469193  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:36.488569  207930 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:36.488848  207930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 19:38:36.488870  207930 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:38:36.489575  207930 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41826->127.0.0.1:32788: read: connection reset by peer
	I1009 19:38:39.637605  207930 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:38:39.637634  207930 ubuntu.go:182] provisioning hostname "ha-898615"
	I1009 19:38:39.637693  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:39.655862  207930 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:39.656140  207930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 19:38:39.656156  207930 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-898615 && echo "ha-898615" | sudo tee /etc/hostname
	I1009 19:38:39.812565  207930 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:38:39.812645  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:39.831046  207930 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:39.831304  207930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 19:38:39.831326  207930 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-898615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-898615/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-898615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:38:39.979591  207930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:38:39.979628  207930 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:38:39.979662  207930 ubuntu.go:190] setting up certificates
	I1009 19:38:39.979675  207930 provision.go:84] configureAuth start
	I1009 19:38:39.979738  207930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:38:39.997703  207930 provision.go:143] copyHostCerts
	I1009 19:38:39.997746  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:38:39.997777  207930 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:38:39.997806  207930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:38:39.997879  207930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:38:39.997970  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:38:39.997989  207930 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:38:39.997996  207930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:38:39.998029  207930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:38:39.998077  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:38:39.998096  207930 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:38:39.998102  207930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:38:39.998125  207930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:38:39.998178  207930 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.ha-898615 san=[127.0.0.1 192.168.49.2 ha-898615 localhost minikube]
	I1009 19:38:40.329941  207930 provision.go:177] copyRemoteCerts
	I1009 19:38:40.330005  207930 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:38:40.330048  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:40.348609  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:40.453024  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:38:40.453090  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:38:40.471037  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:38:40.471100  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:38:40.488791  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:38:40.488882  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:38:40.506540  207930 provision.go:87] duration metric: took 526.848912ms to configureAuth
	I1009 19:38:40.506573  207930 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:38:40.506763  207930 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:40.506890  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:40.524930  207930 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:40.525160  207930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 19:38:40.525178  207930 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:38:40.786346  207930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:38:40.786372  207930 machine.go:96] duration metric: took 4.31723847s to provisionDockerMachine
	I1009 19:38:40.786407  207930 start.go:294] postStartSetup for "ha-898615" (driver="docker")
	I1009 19:38:40.786419  207930 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:38:40.786479  207930 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:38:40.786518  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:40.804162  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:40.908341  207930 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:38:40.911873  207930 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:38:40.911904  207930 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:38:40.911923  207930 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:38:40.911983  207930 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:38:40.912072  207930 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:38:40.912085  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:38:40.912183  207930 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:38:40.919808  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:38:40.937277  207930 start.go:297] duration metric: took 150.853989ms for postStartSetup
	I1009 19:38:40.937349  207930 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:38:40.937424  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:40.955872  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:41.055635  207930 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:38:41.060222  207930 fix.go:57] duration metric: took 4.892219254s for fixHost
	I1009 19:38:41.060252  207930 start.go:84] releasing machines lock for "ha-898615", held for 4.892295934s
	I1009 19:38:41.060315  207930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:38:41.078135  207930 ssh_runner.go:195] Run: cat /version.json
	I1009 19:38:41.078202  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:41.078238  207930 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:38:41.078301  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:41.096227  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:41.096500  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:41.197266  207930 ssh_runner.go:195] Run: systemctl --version
	I1009 19:38:41.254878  207930 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:38:41.291881  207930 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:38:41.296935  207930 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:38:41.297063  207930 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:38:41.305687  207930 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:38:41.305714  207930 start.go:496] detecting cgroup driver to use...
	I1009 19:38:41.305778  207930 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:38:41.305833  207930 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:38:41.320848  207930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:38:41.334341  207930 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:38:41.334430  207930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:38:41.350433  207930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:38:41.364693  207930 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:38:41.444310  207930 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:38:41.525534  207930 docker.go:234] disabling docker service ...
	I1009 19:38:41.525603  207930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:38:41.540323  207930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:38:41.553168  207930 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:38:41.632212  207930 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:38:41.711096  207930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:38:41.724923  207930 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:38:41.740807  207930 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:38:41.740860  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.750143  207930 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:38:41.750201  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.759647  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.768954  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.778411  207930 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:38:41.786985  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.796139  207930 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.804565  207930 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:41.813340  207930 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:38:41.821627  207930 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:38:41.829434  207930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:41.907787  207930 ssh_runner.go:195] Run: sudo systemctl restart crio
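The sed invocations above (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) collectively edit /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and crio restart just logged. Reconstructed from those commands (not captured from the node), the relevant lines of that drop-in end up as roughly:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]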
	I1009 19:38:42.015071  207930 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:38:42.015128  207930 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:38:42.019190  207930 start.go:564] Will wait 60s for crictl version
	I1009 19:38:42.019246  207930 ssh_runner.go:195] Run: which crictl
	I1009 19:38:42.022757  207930 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:38:42.047602  207930 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:38:42.047669  207930 ssh_runner.go:195] Run: crio --version
	I1009 19:38:42.076709  207930 ssh_runner.go:195] Run: crio --version
	I1009 19:38:42.108280  207930 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:38:42.109626  207930 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:38:42.127160  207930 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:38:42.131748  207930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:38:42.142508  207930 kubeadm.go:883] updating cluster {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClien
tPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:38:42.142654  207930 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:38:42.142740  207930 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:38:42.176610  207930 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:38:42.176633  207930 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:38:42.176682  207930 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:38:42.202986  207930 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:38:42.203009  207930 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:38:42.203021  207930 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:38:42.203143  207930 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-898615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:38:42.203236  207930 ssh_runner.go:195] Run: crio config
	I1009 19:38:42.252192  207930 cni.go:84] Creating CNI manager for ""
	I1009 19:38:42.252222  207930 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:38:42.252244  207930 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:38:42.252274  207930 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-898615 NodeName:ha-898615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:38:42.252455  207930 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-898615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:38:42.252533  207930 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:38:42.261297  207930 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:38:42.261376  207930 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:38:42.269740  207930 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:38:42.283297  207930 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:38:42.296911  207930 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
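The kubeadm configuration rendered above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new (2205 bytes). As a rough standalone sanity check of such a file, assuming the node's bundled kubeadm supports the "config validate" subcommand (present in recent releases), one could run:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new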
	I1009 19:38:42.310550  207930 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:38:42.314737  207930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:38:42.325511  207930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:42.406041  207930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:38:42.431169  207930 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615 for IP: 192.168.49.2
	I1009 19:38:42.431199  207930 certs.go:195] generating shared ca certs ...
	I1009 19:38:42.431223  207930 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:42.431407  207930 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:38:42.431466  207930 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:38:42.431481  207930 certs.go:257] generating profile certs ...
	I1009 19:38:42.431609  207930 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key
	I1009 19:38:42.431640  207930 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.ff60cacd
	I1009 19:38:42.431668  207930 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.ff60cacd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1009 19:38:42.592908  207930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.ff60cacd ...
	I1009 19:38:42.592943  207930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.ff60cacd: {Name:mkbeae8ef9cb7280e84a8eafb5e4ed5a9f929f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:42.593120  207930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.ff60cacd ...
	I1009 19:38:42.593133  207930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.ff60cacd: {Name:mk2a0878011f7339a4c02515e180398732017ed0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:42.593209  207930 certs.go:382] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt.ff60cacd -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt
	I1009 19:38:42.593374  207930 certs.go:386] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.ff60cacd -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key
	I1009 19:38:42.593552  207930 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key
	I1009 19:38:42.593571  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:38:42.593584  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:38:42.593597  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:38:42.593608  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:38:42.593621  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:38:42.593631  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:38:42.593644  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:38:42.593653  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:38:42.593711  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:38:42.593739  207930 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:38:42.593749  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:38:42.593769  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:38:42.593790  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:38:42.593810  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:38:42.593855  207930 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:38:42.593880  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:38:42.593893  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:38:42.593905  207930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:42.594443  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:38:42.612666  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:38:42.632698  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:38:42.651700  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:38:42.670307  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1009 19:38:42.688542  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:38:42.707425  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:38:42.725187  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:38:42.743982  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:38:42.762241  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:38:42.780038  207930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:38:42.798203  207930 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:38:42.811431  207930 ssh_runner.go:195] Run: openssl version
	I1009 19:38:42.818209  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:38:42.827928  207930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:42.831986  207930 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:42.832055  207930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:42.866756  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:38:42.875613  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:38:42.884840  207930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:38:42.888808  207930 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:38:42.888867  207930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:38:42.923046  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:38:42.932828  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:38:42.943293  207930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:38:42.948839  207930 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:38:42.948923  207930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:38:42.995613  207930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:38:43.005104  207930 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:38:43.009159  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:38:43.044274  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:38:43.079173  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:38:43.114440  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:38:43.149740  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:38:43.185030  207930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
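Each of the openssl invocations above uses "-checkend 86400", which exits non-zero if the certificate expires within the next 86400 seconds (24 hours); the restart path evidently runs these before deciding the existing control-plane certificates can be reused. A standalone equivalent for one of them, assuming the same paths on the node, would be:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "certificate valid for at least 24h" || echo "certificate expires within 24h"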
	I1009 19:38:43.220218  207930 kubeadm.go:400] StartCluster: {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPa
th: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:38:43.220324  207930 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:38:43.220402  207930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:38:43.248594  207930 cri.go:89] found id: ""
	I1009 19:38:43.248669  207930 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:38:43.257055  207930 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:38:43.257079  207930 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:38:43.257130  207930 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:38:43.264639  207930 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:38:43.265056  207930 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:38:43.265186  207930 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-137890/kubeconfig needs updating (will repair): [kubeconfig missing "ha-898615" cluster setting kubeconfig missing "ha-898615" context setting]
	I1009 19:38:43.265530  207930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:43.266090  207930 kapi.go:59] client config for ha-898615: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:38:43.266647  207930 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:38:43.266666  207930 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:38:43.266673  207930 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:38:43.266678  207930 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:38:43.266683  207930 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:38:43.266709  207930 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:38:43.267074  207930 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:38:43.275909  207930 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:38:43.275950  207930 kubeadm.go:601] duration metric: took 18.863916ms to restartPrimaryControlPlane
	I1009 19:38:43.275961  207930 kubeadm.go:402] duration metric: took 55.75684ms to StartCluster
	I1009 19:38:43.275983  207930 settings.go:142] acquiring lock: {Name:mk34466033fb866bc8e167d2d953624ad0802283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:43.276054  207930 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:38:43.276601  207930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:43.276813  207930 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:38:43.276876  207930 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:38:43.277003  207930 addons.go:69] Setting storage-provisioner=true in profile "ha-898615"
	I1009 19:38:43.277022  207930 addons.go:238] Setting addon storage-provisioner=true in "ha-898615"
	I1009 19:38:43.277040  207930 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:43.277064  207930 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:38:43.277014  207930 addons.go:69] Setting default-storageclass=true in profile "ha-898615"
	I1009 19:38:43.277117  207930 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-898615"
	I1009 19:38:43.277359  207930 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:43.277562  207930 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:43.280740  207930 out.go:179] * Verifying Kubernetes components...
	I1009 19:38:43.282030  207930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:43.297922  207930 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:38:43.297964  207930 kapi.go:59] client config for ha-898615: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:38:43.298366  207930 addons.go:238] Setting addon default-storageclass=true in "ha-898615"
	I1009 19:38:43.298425  207930 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:38:43.298892  207930 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:38:43.299418  207930 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:38:43.299443  207930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:38:43.299502  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:43.322859  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:43.328834  207930 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:38:43.328858  207930 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:38:43.328930  207930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:38:43.352463  207930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:38:43.401305  207930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:38:43.415541  207930 node_ready.go:35] waiting up to 6m0s for node "ha-898615" to be "Ready" ...
	I1009 19:38:43.436410  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:38:43.466671  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:43.496356  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.496416  207930 retry.go:31] will retry after 342.264655ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:43.525246  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.525289  207930 retry.go:31] will retry after 174.41945ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.701041  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:43.758027  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.758057  207930 retry.go:31] will retry after 209.535579ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.839271  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:43.896734  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.896774  207930 retry.go:31] will retry after 538.756932ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:43.968448  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:44.023415  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:44.023454  207930 retry.go:31] will retry after 556.953167ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:44.436515  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:44.490407  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:44.490440  207930 retry.go:31] will retry after 711.386877ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:44.580632  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:44.634616  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:44.634648  207930 retry.go:31] will retry after 1.063862903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:45.202765  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:45.257625  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:45.257701  207930 retry.go:31] will retry after 1.231190246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:45.416376  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:45.698732  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:45.755705  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:45.755743  207930 retry.go:31] will retry after 975.429295ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:46.489752  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:46.545290  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:46.545326  207930 retry.go:31] will retry after 1.502139969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:46.731733  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:46.787009  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:46.787042  207930 retry.go:31] will retry after 2.693302994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:47.416975  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:48.048320  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:48.103285  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:48.103320  207930 retry.go:31] will retry after 2.181453682s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:49.480527  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:49.538700  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:49.538734  207930 retry.go:31] will retry after 4.218840209s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:49.916480  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:50.284976  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:50.341540  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:50.341577  207930 retry.go:31] will retry after 1.691103888s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:52.033656  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:52.088485  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:52.088524  207930 retry.go:31] will retry after 2.514845713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:52.416328  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:53.758082  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:53.814997  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:53.815038  207930 retry.go:31] will retry after 5.532299656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:54.416935  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:54.604251  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:54.659736  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:54.659775  207930 retry.go:31] will retry after 3.993767117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:56.916955  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:38:58.654616  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:38:58.713410  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:58.713444  207930 retry.go:31] will retry after 9.568142224s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:59.347766  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:38:59.404337  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:38:59.404389  207930 retry.go:31] will retry after 6.225933732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:38:59.417079  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:01.916592  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:03.916927  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:05.630497  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:39:05.685477  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:05.685519  207930 retry.go:31] will retry after 12.822953608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:06.417252  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:08.282818  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:39:08.337692  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:08.337725  207930 retry.go:31] will retry after 7.236832581s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:08.916334  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:10.916501  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:12.917166  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:15.416556  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:15.574832  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:39:15.630769  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:15.630807  207930 retry.go:31] will retry after 32.093842325s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:17.417148  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:18.509437  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:39:18.569821  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:18.569859  207930 retry.go:31] will retry after 8.204907126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:19.917021  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:22.416351  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:24.416692  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:26.775723  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:39:26.830536  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:26.830579  207930 retry.go:31] will retry after 15.287470649s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:26.916248  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:28.916363  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:30.916660  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:33.416644  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:35.916574  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:37.917054  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:40.416285  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:42.118997  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:39:42.177198  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:42.177233  207930 retry.go:31] will retry after 19.60601903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:42.417176  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:44.916569  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:46.916954  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:39:47.725338  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:39:47.781475  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:39:47.781505  207930 retry.go:31] will retry after 23.586099799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:39:49.416272  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:51.416491  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:53.416753  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:55.417076  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:39:57.916443  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:00.416299  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:40:01.784079  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:40:01.841854  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:40:01.842020  207930 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1009 19:40:02.416680  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:04.417047  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:06.917026  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:09.417084  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:40:11.367994  207930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:40:11.424607  207930 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:40:11.424769  207930 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:40:11.426781  207930 out.go:179] * Enabled addons: 
	I1009 19:40:11.428422  207930 addons.go:514] duration metric: took 1m28.151542071s for enable addons: enabled=[]
	W1009 19:40:11.916694  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:14.416598  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:16.416881  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:18.916218  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:20.916269  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:22.916576  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:24.917209  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:27.416502  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:29.916343  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:31.916934  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:34.416282  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:36.416512  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:38.416629  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:40.916603  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:42.917115  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:44.917161  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:47.416324  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:49.416455  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:51.416819  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:53.916417  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:55.916617  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:40:57.917078  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:00.417118  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:02.916234  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:04.916660  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:07.417218  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:09.916259  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:11.916515  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:13.917072  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:16.416622  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:18.916522  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:21.416789  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:23.916210  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:26.416522  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:28.916461  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:31.416717  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:33.916099  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:36.416332  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:38.916209  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:40.917157  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:43.416729  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:45.916510  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:47.916984  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:50.416131  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:52.416502  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:54.416618  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:56.416678  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:41:58.916609  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:01.416745  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:03.416958  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:05.916865  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:07.917104  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:10.416213  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:12.416339  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:14.416628  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:16.416997  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:18.916238  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:20.917143  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:23.416620  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:25.916367  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:27.916596  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:30.416273  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:32.916182  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:34.916794  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:37.416177  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:39.916144  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:41.916475  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:44.416966  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:46.916415  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:49.416202  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:51.416497  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:53.416539  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:55.916219  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:42:58.416153  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:00.417148  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:02.916226  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:04.916621  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:07.416226  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:09.916189  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:11.916327  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:13.916488  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:15.917040  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:18.416211  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:20.916112  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:22.916253  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:24.916887  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:26.917148  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:29.417134  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:31.916366  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:33.916553  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:36.416538  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:38.416719  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:40.916669  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:42.916983  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:45.416239  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:47.916206  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:50.417144  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:52.917059  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:55.416176  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:57.916181  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:43:59.916956  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:02.416315  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:04.416660  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:06.416831  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:08.417094  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:10.916165  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:12.916294  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:14.916520  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:16.916888  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:19.416178  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:21.416435  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:23.416520  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:25.417135  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:27.916344  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:30.416306  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:32.416429  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:34.416783  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:36.916172  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:39.416191  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:44:41.416480  207930 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:44:43.416121  207930 node_ready.go:38] duration metric: took 6m0.000528946s for node "ha-898615" to be "Ready" ...
	I1009 19:44:43.418255  207930 out.go:203] 
	W1009 19:44:43.419680  207930 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:44:43.419700  207930 out.go:285] * 
	W1009 19:44:43.421462  207930 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:44:43.422822  207930 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.547885787Z" level=info msg="createCtr: removing container 0d518b35fc4962587c6c0c28fd540b49da12c05e477b6cfa87ac0ac0ee9fcc9b" id=35169ac7-542c-4f30-a813-a1ee3b7a8393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.547920392Z" level=info msg="createCtr: deleting container 0d518b35fc4962587c6c0c28fd540b49da12c05e477b6cfa87ac0ac0ee9fcc9b from storage" id=35169ac7-542c-4f30-a813-a1ee3b7a8393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:40 ha-898615 crio[519]: time="2025-10-09T19:44:40.550180502Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-898615_kube-system_606a9e8949f01295d66148e5eac379ce_0" id=35169ac7-542c-4f30-a813-a1ee3b7a8393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.524988217Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=43a5b6a6-f84d-49cd-8c5a-c128f026669a name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.525136579Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=e5d838d9-f724-4223-b0ab-a9fb247d885e name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.526996815Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=6250f15f-4100-42b0-8e57-2cb496af37ce name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.527025675Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c0395c98-d71f-4792-9ecd-4cfada852fe9 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.527912288Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-898615/kube-apiserver" id=eee5beec-9ad9-4942-817f-e85eb985a738 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.527932447Z" level=info msg="Creating container: kube-system/etcd-ha-898615/etcd" id=cbb2d9dc-9a35-48ef-84bc-ce6518d74cdf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.528120676Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.528329702Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.53312099Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.533550507Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.534573617Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.535086521Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.556360903Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=cbb2d9dc-9a35-48ef-84bc-ce6518d74cdf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.557576325Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=eee5beec-9ad9-4942-817f-e85eb985a738 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.557768324Z" level=info msg="createCtr: deleting container ID 800f4656e5e37bbe494d15d269cd01f6abaffe6c73e52a8cd397a0258b238b35 from idIndex" id=cbb2d9dc-9a35-48ef-84bc-ce6518d74cdf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.557805206Z" level=info msg="createCtr: removing container 800f4656e5e37bbe494d15d269cd01f6abaffe6c73e52a8cd397a0258b238b35" id=cbb2d9dc-9a35-48ef-84bc-ce6518d74cdf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.557845523Z" level=info msg="createCtr: deleting container 800f4656e5e37bbe494d15d269cd01f6abaffe6c73e52a8cd397a0258b238b35 from storage" id=cbb2d9dc-9a35-48ef-84bc-ce6518d74cdf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.558888197Z" level=info msg="createCtr: deleting container ID 96ba9d198eac03db7521e0f5be4c70a2153f82a1a91864a437b52e715de2407f from idIndex" id=eee5beec-9ad9-4942-817f-e85eb985a738 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.55891952Z" level=info msg="createCtr: removing container 96ba9d198eac03db7521e0f5be4c70a2153f82a1a91864a437b52e715de2407f" id=eee5beec-9ad9-4942-817f-e85eb985a738 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.558948995Z" level=info msg="createCtr: deleting container 96ba9d198eac03db7521e0f5be4c70a2153f82a1a91864a437b52e715de2407f from storage" id=eee5beec-9ad9-4942-817f-e85eb985a738 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.561402545Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-898615_kube-system_2d43b86ac03e93e11f35c923e2560103_0" id=cbb2d9dc-9a35-48ef-84bc-ce6518d74cdf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:44:47 ha-898615 crio[519]: time="2025-10-09T19:44:47.561801193Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-898615_kube-system_aad558bc9f2efde8d3c90d373798de18_0" id=eee5beec-9ad9-4942-817f-e85eb985a738 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:44:47.894260    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:44:47.894832    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:44:47.896433    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:44:47.896922    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:44:47.898202    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:44:47 up  1:27,  0 user,  load average: 0.11, 0.07, 1.10
	Linux ha-898615 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:44:40 ha-898615 kubelet[666]:  > podSandboxID="6b25866fd5abc60bd238bd9a662548c51d322e9ed30360455db0617325fb150e"
	Oct 09 19:44:40 ha-898615 kubelet[666]: E1009 19:44:40.550644     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:44:40 ha-898615 kubelet[666]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-898615_kube-system(606a9e8949f01295d66148e5eac379ce): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:44:40 ha-898615 kubelet[666]:  > logger="UnhandledError"
	Oct 09 19:44:40 ha-898615 kubelet[666]: E1009 19:44:40.550675     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-898615" podUID="606a9e8949f01295d66148e5eac379ce"
	Oct 09 19:44:42 ha-898615 kubelet[666]: E1009 19:44:42.539969     666 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-898615\" not found"
	Oct 09 19:44:45 ha-898615 kubelet[666]: E1009 19:44:45.169650     666 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:44:45 ha-898615 kubelet[666]: I1009 19:44:45.342395     666 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:44:45 ha-898615 kubelet[666]: E1009 19:44:45.342851     666 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	Oct 09 19:44:47 ha-898615 kubelet[666]: E1009 19:44:47.524524     666 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:44:47 ha-898615 kubelet[666]: E1009 19:44:47.524695     666 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:44:47 ha-898615 kubelet[666]: E1009 19:44:47.561706     666 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:44:47 ha-898615 kubelet[666]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:44:47 ha-898615 kubelet[666]:  > podSandboxID="d12bd4f35ce0eeec3e602b9b67aa62ea01b9a7b86cb07b552982f765e8d84f7a"
	Oct 09 19:44:47 ha-898615 kubelet[666]: E1009 19:44:47.561807     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:44:47 ha-898615 kubelet[666]:         container etcd start failed in pod etcd-ha-898615_kube-system(2d43b86ac03e93e11f35c923e2560103): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:44:47 ha-898615 kubelet[666]:  > logger="UnhandledError"
	Oct 09 19:44:47 ha-898615 kubelet[666]: E1009 19:44:47.561839     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-898615" podUID="2d43b86ac03e93e11f35c923e2560103"
	Oct 09 19:44:47 ha-898615 kubelet[666]: E1009 19:44:47.562011     666 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:44:47 ha-898615 kubelet[666]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:44:47 ha-898615 kubelet[666]:  > podSandboxID="6cf5e76a918ecf34d99855ac661a1e6984a2f2e13969711afc706e556815ec7b"
	Oct 09 19:44:47 ha-898615 kubelet[666]: E1009 19:44:47.562080     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:44:47 ha-898615 kubelet[666]:         container kube-apiserver start failed in pod kube-apiserver-ha-898615_kube-system(aad558bc9f2efde8d3c90d373798de18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:44:47 ha-898615 kubelet[666]:  > logger="UnhandledError"
	Oct 09 19:44:47 ha-898615 kubelet[666]: E1009 19:44:47.563233     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-898615" podUID="aad558bc9f2efde8d3c90d373798de18"
	

                                                
                                                
-- /stdout --
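The wait loop that times out above is node_ready.go polling the "Ready" condition of node "ha-898615" roughly every 2-2.5s until its 6m0s deadline; because the apiserver on 192.168.49.2:8443 never comes up, every poll ends in "connection refused" and the start exits with GUEST_START. A minimal client-go sketch of that kind of readiness probe, assuming the KUBECONFIG path used by this job and the node name from the logs (an illustrative stand-in, not the actual minikube implementation), looks like:

	// readyprobe.go - hypothetical sketch: poll a node's Ready condition until a deadline,
	// the same shape of wait that node_ready.go reports on above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig path (the KUBECONFIG reported for this run).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21683-137890/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// 6 minutes mirrors the node wait deadline seen in the log.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-898615", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for Ready:", ctx.Err())
				return
			case <-time.After(2 * time.Second):
			}
		}
	}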
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615: exit status 2 (293.55631ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-898615" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.61s)
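What keeps the apiserver down is visible in the CRI-O and kubelet sections above: every CreateContainer attempt for etcd, kube-apiserver and kube-controller-manager fails with "cannot open sd-bus: No such file or directory", so no static-pod container ever starts on ha-898615. That error normally means the OCI runtime path that talks to systemd over D-Bus (typically the systemd cgroup manager) cannot find the bus socket inside the node. A hypothetical host-side check, using only docker exec plus stat/grep against standard CRI-O paths (not part of helpers_test.go), would be:

	// sdbuscheck.go - hypothetical diagnostic: look for the system D-Bus socket and the
	// configured cgroup manager inside the node container named in the report.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		node := "ha-898615" // container name from the logs above
		checks := [][]string{
			{"docker", "exec", node, "stat", "/run/dbus/system_bus_socket"},
			{"docker", "exec", node, "grep", "-r", "cgroup_manager", "/etc/crio/"},
		}
		for _, args := range checks {
			out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
			fmt.Printf("$ %v\n%s(err=%v)\n\n", args, out, err)
		}
	}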

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (1.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-898615 stop --alsologtostderr -v 5: (1.211115709s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5: exit status 7 (67.916394ms)

                                                
                                                
-- stdout --
	ha-898615
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:44:49.536492  213472 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:44:49.536754  213472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:44:49.536763  213472 out.go:374] Setting ErrFile to fd 2...
	I1009 19:44:49.536767  213472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:44:49.536942  213472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:44:49.537115  213472 out.go:368] Setting JSON to false
	I1009 19:44:49.537139  213472 mustload.go:65] Loading cluster: ha-898615
	I1009 19:44:49.537255  213472 notify.go:221] Checking for updates...
	I1009 19:44:49.537530  213472 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:49.537545  213472 status.go:174] checking status of ha-898615 ...
	I1009 19:44:49.537921  213472 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:49.555435  213472 status.go:371] ha-898615 host status = "Stopped" (err=<nil>)
	I1009 19:44:49.555457  213472 status.go:384] host is not running, skipping remaining checks
	I1009 19:44:49.555463  213472 status.go:176] ha-898615 status: &{Name:ha-898615 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5": ha-898615
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5": ha-898615
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-898615 status --alsologtostderr -v 5": ha-898615
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-898615
helpers_test.go:243: (dbg) docker inspect ha-898615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	        "Created": "2025-10-09T19:27:46.185451139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:38:36.214223554Z",
	            "FinishedAt": "2025-10-09T19:44:48.613925122Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hostname",
	        "HostsPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hosts",
	        "LogPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e-json.log",
	        "Name": "/ha-898615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-898615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-898615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	                "LowerDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-898615",
	                "Source": "/var/lib/docker/volumes/ha-898615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-898615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-898615",
	                "name.minikube.sigs.k8s.io": "ha-898615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-898615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a28b71fbb3464ef90fae817eea2f214ee0a3d1fe379af7ad6c6cc41b0261919e",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-898615",
	                        "24eb0e5b6c52"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
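The inspect output above carries the fields the post-mortem keys off: State.Status is "exited" with ExitCode 130 and a FinishedAt of 19:44:48, which is why the follow-up status checks report "Stopped". A small hypothetical helper (not from helpers_test.go) that pulls just those fields out of the `docker inspect` JSON array:

	// inspectstate.go - hypothetical sketch: decode the State fields from docker inspect output.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type inspectEntry struct {
		Name  string
		State struct {
			Status     string
			ExitCode   int
			FinishedAt string
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "ha-898615").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		for _, e := range entries {
			fmt.Printf("%s: status=%s exit=%d finished=%s\n",
				e.Name, e.State.Status, e.State.ExitCode, e.State.FinishedAt)
		}
	}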
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615: exit status 7 (76.298981ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 7 (may be ok)
helpers_test.go:249: "ha-898615" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (1.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (368.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1009 19:45:00.257279  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:37.176904  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (6m7.123253361s)

                                                
                                                
-- stdout --
	* [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:44:49.701374  213529 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:44:49.701684  213529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:44:49.701694  213529 out.go:374] Setting ErrFile to fd 2...
	I1009 19:44:49.701699  213529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:44:49.701891  213529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:44:49.702347  213529 out.go:368] Setting JSON to false
	I1009 19:44:49.703363  213529 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5239,"bootTime":1760033851,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:44:49.703499  213529 start.go:143] virtualization: kvm guest
	I1009 19:44:49.705480  213529 out.go:179] * [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:44:49.706677  213529 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:44:49.706680  213529 notify.go:221] Checking for updates...
	I1009 19:44:49.709030  213529 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:44:49.710400  213529 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:44:49.711704  213529 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:44:49.712804  213529 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:44:49.713905  213529 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:44:49.715428  213529 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:49.715879  213529 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:44:49.737923  213529 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:44:49.738109  213529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:44:49.796426  213529 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:44:49.785317755 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:44:49.796539  213529 docker.go:319] overlay module found
	I1009 19:44:49.801541  213529 out.go:179] * Using the docker driver based on existing profile
	I1009 19:44:49.802798  213529 start.go:309] selected driver: docker
	I1009 19:44:49.802817  213529 start.go:930] validating driver "docker" against &{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:44:49.802903  213529 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:44:49.802989  213529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:44:49.866941  213529 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:44:49.857185251 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:44:49.867781  213529 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:44:49.867825  213529 cni.go:84] Creating CNI manager for ""
	I1009 19:44:49.867876  213529 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:44:49.867941  213529 start.go:353] cluster config:
	{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:44:49.869783  213529 out.go:179] * Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	I1009 19:44:49.871046  213529 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:44:49.872323  213529 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:44:49.873634  213529 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:44:49.873676  213529 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:44:49.873671  213529 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:44:49.873684  213529 cache.go:58] Caching tarball of preloaded images
	I1009 19:44:49.873769  213529 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:44:49.873780  213529 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:44:49.873868  213529 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:44:49.894117  213529 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:44:49.894140  213529 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:44:49.894160  213529 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:44:49.894193  213529 start.go:361] acquireMachinesLock for ha-898615: {Name:mk23a3ebf19c307a491c00f9452d757837bd240e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:44:49.894262  213529 start.go:365] duration metric: took 46.947µs to acquireMachinesLock for "ha-898615"
	I1009 19:44:49.894284  213529 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:44:49.894295  213529 fix.go:55] fixHost starting: 
	I1009 19:44:49.894546  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:49.912866  213529 fix.go:113] recreateIfNeeded on ha-898615: state=Stopped err=<nil>
	W1009 19:44:49.912910  213529 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:44:49.914819  213529 out.go:252] * Restarting existing docker container for "ha-898615" ...
	I1009 19:44:49.914886  213529 cli_runner.go:164] Run: docker start ha-898615
	I1009 19:44:50.154621  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:50.173856  213529 kic.go:430] container "ha-898615" state is running.
	I1009 19:44:50.174272  213529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:44:50.192860  213529 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:44:50.193122  213529 machine.go:93] provisionDockerMachine start ...
	I1009 19:44:50.193203  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:50.211807  213529 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:50.212085  213529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:44:50.212111  213529 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:44:50.212792  213529 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45050->127.0.0.1:32793: read: connection reset by peer
	I1009 19:44:53.362882  213529 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:44:53.362920  213529 ubuntu.go:182] provisioning hostname "ha-898615"
	I1009 19:44:53.363008  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:53.383229  213529 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:53.383482  213529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:44:53.383500  213529 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-898615 && echo "ha-898615" | sudo tee /etc/hostname
	I1009 19:44:53.540739  213529 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:44:53.540832  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:53.559203  213529 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:53.559489  213529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:44:53.559515  213529 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-898615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-898615/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-898615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:44:53.707903  213529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:44:53.707951  213529 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:44:53.707980  213529 ubuntu.go:190] setting up certificates
	I1009 19:44:53.707995  213529 provision.go:84] configureAuth start
	I1009 19:44:53.708056  213529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:44:53.726880  213529 provision.go:143] copyHostCerts
	I1009 19:44:53.726919  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:44:53.726954  213529 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:44:53.726969  213529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:44:53.727040  213529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:44:53.727121  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:44:53.727138  213529 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:44:53.727144  213529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:44:53.727170  213529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:44:53.727216  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:44:53.727232  213529 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:44:53.727242  213529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:44:53.727264  213529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:44:53.727314  213529 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.ha-898615 san=[127.0.0.1 192.168.49.2 ha-898615 localhost minikube]
	I1009 19:44:53.827371  213529 provision.go:177] copyRemoteCerts
	I1009 19:44:53.827447  213529 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:44:53.827485  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:53.846303  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:53.951136  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:44:53.951199  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:44:53.969281  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:44:53.969347  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:44:53.987249  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:44:53.987314  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:44:54.005891  213529 provision.go:87] duration metric: took 297.874582ms to configureAuth
	I1009 19:44:54.005921  213529 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:44:54.006109  213529 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:54.006224  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.024397  213529 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:54.024626  213529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:44:54.024642  213529 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:44:54.289546  213529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:44:54.289573  213529 machine.go:96] duration metric: took 4.096433967s to provisionDockerMachine
	I1009 19:44:54.289589  213529 start.go:294] postStartSetup for "ha-898615" (driver="docker")
	I1009 19:44:54.289601  213529 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:44:54.289664  213529 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:44:54.289714  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.308340  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:54.413217  213529 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:44:54.417126  213529 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:44:54.417190  213529 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:44:54.417225  213529 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:44:54.417286  213529 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:44:54.417372  213529 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:44:54.417406  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:44:54.417501  213529 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:44:54.425333  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:44:54.443854  213529 start.go:297] duration metric: took 154.246925ms for postStartSetup
	I1009 19:44:54.443940  213529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:44:54.443976  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.461915  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:54.563125  213529 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:44:54.568111  213529 fix.go:57] duration metric: took 4.673810177s for fixHost
	I1009 19:44:54.568142  213529 start.go:84] releasing machines lock for "ha-898615", held for 4.673868514s
	I1009 19:44:54.568206  213529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:44:54.586879  213529 ssh_runner.go:195] Run: cat /version.json
	I1009 19:44:54.586918  213529 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:44:54.586944  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.586979  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.606718  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:54.607259  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:54.762981  213529 ssh_runner.go:195] Run: systemctl --version
	I1009 19:44:54.769817  213529 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:44:54.808737  213529 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:44:54.813835  213529 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:44:54.813899  213529 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:44:54.822567  213529 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:44:54.822602  213529 start.go:496] detecting cgroup driver to use...
	I1009 19:44:54.822639  213529 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:44:54.822691  213529 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:44:54.837558  213529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:44:54.850649  213529 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:44:54.850721  213529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:44:54.865664  213529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:44:54.878415  213529 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:44:54.957542  213529 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:44:55.038948  213529 docker.go:234] disabling docker service ...
	I1009 19:44:55.039033  213529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:44:55.054311  213529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:44:55.066894  213529 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:44:55.146756  213529 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:44:55.226886  213529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:44:55.239751  213529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:44:55.254322  213529 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:44:55.254392  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.263683  213529 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:44:55.263764  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.272570  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.281877  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.291212  213529 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:44:55.299205  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.308053  213529 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.316488  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.325623  213529 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:44:55.333246  213529 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:44:55.340957  213529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:44:55.420337  213529 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:44:55.530206  213529 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:44:55.530277  213529 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:44:55.534555  213529 start.go:564] Will wait 60s for crictl version
	I1009 19:44:55.534616  213529 ssh_runner.go:195] Run: which crictl
	I1009 19:44:55.538439  213529 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:44:55.564260  213529 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:44:55.564337  213529 ssh_runner.go:195] Run: crio --version
	I1009 19:44:55.593049  213529 ssh_runner.go:195] Run: crio --version
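After the `systemctl restart crio` at 19:44:55.420, the log waits up to 60s for the socket path /var/run/crio/crio.sock and then for crictl to answer. A rough standalone equivalent of that socket wait, assuming only the socket path shown in the log (a sketch, not minikube's implementation):

    // waitsock.go - wait up to 60s for a unix socket to exist and accept
    // connections, mirroring the "Will wait 60s for socket path" step above.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/crio/crio.sock" // path from the log
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(sock); err == nil {
                // The path exists; also confirm something is listening on it.
                if c, err := net.DialTimeout("unix", sock, time.Second); err == nil {
                    c.Close()
                    fmt.Println("crio socket is up")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
        os.Exit(1)
    }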
	I1009 19:44:55.622200  213529 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:44:55.623540  213529 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:44:55.641466  213529 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:44:55.646233  213529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:44:55.657668  213529 kubeadm.go:883] updating cluster {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClien
tPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:44:55.657780  213529 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:44:55.657822  213529 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:44:55.689903  213529 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:44:55.689929  213529 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:44:55.689989  213529 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:44:55.716841  213529 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:44:55.716874  213529 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:44:55.716885  213529 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:44:55.717021  213529 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-898615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:44:55.717104  213529 ssh_runner.go:195] Run: crio config
	I1009 19:44:55.762724  213529 cni.go:84] Creating CNI manager for ""
	I1009 19:44:55.762743  213529 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:44:55.762760  213529 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:44:55.762781  213529 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-898615 NodeName:ha-898615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:44:55.762917  213529 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-898615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:44:55.762981  213529 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:44:55.771348  213529 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:44:55.771430  213529 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:44:55.779128  213529 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:44:55.792326  213529 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:44:55.805801  213529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
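The kubeadm.yaml.new just copied above stacks several YAML documents, and its KubeletConfiguration sets cgroupDriver: systemd, which has to agree with the cgroup_manager = "systemd" value written into /etc/crio/crio.conf.d/02-crio.conf at 19:44:55.263. A small sketch that pulls the cgroupDriver out of such a multi-document file so the two values can be compared by hand; it assumes the gopkg.in/yaml.v3 package and a file path passed on the command line, and is not part of minikube itself:

    // cgroupdriver.go - print cgroupDriver from the KubeletConfiguration
    // document inside a multi-document kubeadm YAML like the one above.
    package main

    import (
        "bytes"
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        data, err := os.ReadFile(os.Args[1]) // e.g. a local copy of kubeadm.yaml.new
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        dec := yaml.NewDecoder(bytes.NewReader(data))
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err != nil {
                break // io.EOF once all documents are consumed
            }
            if doc["kind"] == "KubeletConfiguration" {
                fmt.Printf("KubeletConfiguration cgroupDriver=%v\n", doc["cgroupDriver"])
            }
        }
    }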
	I1009 19:44:55.818503  213529 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:44:55.822410  213529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:44:55.832657  213529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:44:55.914951  213529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:44:55.941861  213529 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615 for IP: 192.168.49.2
	I1009 19:44:55.941890  213529 certs.go:195] generating shared ca certs ...
	I1009 19:44:55.941926  213529 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:44:55.942116  213529 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:44:55.942169  213529 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:44:55.942183  213529 certs.go:257] generating profile certs ...
	I1009 19:44:55.942287  213529 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key
	I1009 19:44:55.942359  213529 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.ff60cacd
	I1009 19:44:55.942424  213529 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key
	I1009 19:44:55.942440  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:44:55.942457  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:44:55.942474  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:44:55.942488  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:44:55.942501  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:44:55.942518  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:44:55.942537  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:44:55.942552  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:44:55.942619  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:44:55.942659  213529 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:44:55.942668  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:44:55.942696  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:44:55.942725  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:44:55.942757  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:44:55.942808  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:44:55.942845  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:44:55.942867  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:44:55.942884  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:44:55.943621  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:44:55.964066  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:44:55.983870  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:44:56.003424  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:44:56.027059  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1009 19:44:56.045446  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:44:56.062784  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:44:56.080346  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:44:56.098356  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:44:56.115529  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:44:56.133046  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:44:56.151123  213529 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:44:56.164371  213529 ssh_runner.go:195] Run: openssl version
	I1009 19:44:56.171082  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:44:56.180682  213529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:44:56.184714  213529 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:44:56.184782  213529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:44:56.219575  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:44:56.228330  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:44:56.237302  213529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:44:56.241163  213529 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:44:56.241221  213529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:44:56.275220  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:44:56.283849  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:44:56.292853  213529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:44:56.296942  213529 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:44:56.297002  213529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:44:56.331446  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
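The three blocks above repeat one pattern per CA bundle: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 back to it so OpenSSL-based clients trust it. A sketch of the same steps, shelling out to openssl for the hash exactly as the log does; the path used is one of those shown above, and it would need to run as root on the target machine:

    // catrust.go - link a CA certificate into /etc/ssl/certs under its
    // OpenSSL subject hash, the pattern repeated for each .pem file above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // one of the paths from the log
        // `openssl x509 -hash -noout -in <pem>` prints the subject hash, e.g. b5213941.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Replace any existing link, as the `ln -fs` in the log would.
        _ = os.Remove(link)
        if err := os.Symlink(pemPath, link); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("linked", link, "->", pemPath)
    }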
	I1009 19:44:56.340819  213529 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:44:56.344986  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:44:56.380467  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:44:56.415493  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:44:56.456227  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:44:56.501884  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:44:56.538941  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
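The six `openssl x509 -noout -checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours; all of them evidently pass here, since the flow continues straight into StartCluster without any renewal. A standalone sketch of the same check in Go, assuming only a PEM certificate path on the command line:

    // checkend.go - report whether a PEM certificate expires within 24h,
    // the question `openssl x509 -noout -checkend 86400` answers above.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile(os.Args[1]) // e.g. a copy of apiserver.crt
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        remaining := time.Until(cert.NotAfter)
        fmt.Printf("notAfter=%s remaining=%s\n", cert.NotAfter.Format(time.RFC3339), remaining)
        if remaining < 24*time.Hour {
            fmt.Println("certificate will expire within 86400s")
            os.Exit(1)
        }
        fmt.Println("certificate will not expire within 86400s")
    }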
	I1009 19:44:56.573879  213529 kubeadm.go:400] StartCluster: {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPa
th: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:44:56.573988  213529 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:44:56.574038  213529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:44:56.602728  213529 cri.go:89] found id: ""
	I1009 19:44:56.602785  213529 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:44:56.610971  213529 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:44:56.610988  213529 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:44:56.611028  213529 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:44:56.618277  213529 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:44:56.618823  213529 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:44:56.618971  213529 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-137890/kubeconfig needs updating (will repair): [kubeconfig missing "ha-898615" cluster setting kubeconfig missing "ha-898615" context setting]
	I1009 19:44:56.619299  213529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:44:56.619977  213529 kapi.go:59] client config for ha-898615: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:44:56.620511  213529 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:44:56.620536  213529 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:44:56.620544  213529 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:44:56.620550  213529 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:44:56.620560  213529 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:44:56.620569  213529 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:44:56.621020  213529 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:44:56.628535  213529 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:44:56.628568  213529 kubeadm.go:601] duration metric: took 17.574485ms to restartPrimaryControlPlane
	I1009 19:44:56.628593  213529 kubeadm.go:402] duration metric: took 54.723918ms to StartCluster
	I1009 19:44:56.628613  213529 settings.go:142] acquiring lock: {Name:mk34466033fb866bc8e167d2d953624ad0802283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:44:56.628681  213529 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:44:56.629423  213529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:44:56.629662  213529 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:44:56.629817  213529 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:56.629772  213529 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:44:56.629859  213529 addons.go:69] Setting storage-provisioner=true in profile "ha-898615"
	I1009 19:44:56.629886  213529 addons.go:238] Setting addon storage-provisioner=true in "ha-898615"
	I1009 19:44:56.629917  213529 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:44:56.629863  213529 addons.go:69] Setting default-storageclass=true in profile "ha-898615"
	I1009 19:44:56.630024  213529 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-898615"
	I1009 19:44:56.630251  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:56.630340  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:56.639537  213529 out.go:179] * Verifying Kubernetes components...
	I1009 19:44:56.640997  213529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:44:56.650141  213529 kapi.go:59] client config for ha-898615: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:44:56.650441  213529 addons.go:238] Setting addon default-storageclass=true in "ha-898615"
	I1009 19:44:56.650481  213529 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:44:56.651083  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:56.651372  213529 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:44:56.652834  213529 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:44:56.652857  213529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:44:56.652904  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:56.672424  213529 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:44:56.672449  213529 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:44:56.672517  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:56.673443  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:56.697607  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:56.750572  213529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:44:56.764278  213529 node_ready.go:35] waiting up to 6m0s for node "ha-898615" to be "Ready" ...
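node_ready.go then polls the apiserver for up to 6m0s until the "ha-898615" node reports the Ready condition. A rough equivalent using client-go, assuming a kubeconfig path passed on the command line and the node name from the log; the real wait lives in minikube's own helpers, this is only a sketch of the shape of that poll:

    // nodeready.go - wait until a node reports Ready, a rough stand-in for
    // the "waiting up to 6m0s for node ... to be Ready" step above.
    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        kubeconfig := os.Args[1] // e.g. the kubeconfig written for the ha-898615 context
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, "ha-898615", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            select {
            case <-ctx.Done():
                fmt.Fprintln(os.Stderr, "timed out waiting for node to be Ready")
                os.Exit(1)
            case <-time.After(2 * time.Second):
            }
        }
    }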
	I1009 19:44:56.790989  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:44:56.807492  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:56.848417  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:56.848472  213529 retry.go:31] will retry after 181.300226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:44:56.863090  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:56.863123  213529 retry.go:31] will retry after 174.582695ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
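Each failed apply above is followed by a retry.go line scheduling another attempt after a short, jittered delay; the failures themselves are expected at this stage because the apiserver behind localhost:8443 is not answering yet. A minimal sketch of that retry shape, where runApply is a hypothetical stand-in for the logged kubectl invocation and the delay growth and jitter are illustrative rather than minikube's exact policy:

    // retryapply.go - retry a flaky command with growing, jittered delays,
    // the same shape as the retry.go lines interleaved with the failures above.
    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // runApply is a hypothetical stand-in for the kubectl apply calls in the log.
    func runApply(manifest string) error {
        return exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
    }

    func main() {
        manifest := "/etc/kubernetes/addons/storage-provisioner.yaml"
        delay := 200 * time.Millisecond
        for attempt := 1; attempt <= 6; attempt++ {
            if err := runApply(manifest); err == nil {
                fmt.Println("apply succeeded on attempt", attempt)
                return
            } else {
                // Jitter the delay a little and grow it, as the logged retries do.
                wait := delay + time.Duration(rand.Int63n(int64(delay)))
                fmt.Printf("apply failed (%v), will retry after %s\n", err, wait)
                time.Sleep(wait)
                delay *= 2
            }
        }
        fmt.Println("giving up after repeated failures")
    }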
	I1009 19:44:57.030457  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:44:57.038253  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:57.099728  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.099769  213529 retry.go:31] will retry after 488.394922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:44:57.103491  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.103516  213529 retry.go:31] will retry after 360.880737ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.464716  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:57.519993  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.520038  213529 retry.go:31] will retry after 545.599641ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.589293  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:44:57.644623  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.644657  213529 retry.go:31] will retry after 328.462818ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.973799  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:44:58.029584  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.029617  213529 retry.go:31] will retry after 567.831757ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.065802  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:58.119966  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.120001  213529 retry.go:31] will retry after 1.041516604s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.598304  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:44:58.652889  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.652928  213529 retry.go:31] will retry after 716.276622ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:44:58.765698  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:44:59.162239  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:59.218055  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:59.218096  213529 retry.go:31] will retry after 1.23966397s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:59.370025  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:44:59.425158  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:59.425205  213529 retry.go:31] will retry after 1.359321817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:00.458325  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:00.515848  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:00.515882  213529 retry.go:31] will retry after 2.661338285s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:00.765913  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:00.785102  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:00.843571  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:00.843612  213529 retry.go:31] will retry after 2.328073619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:03.172702  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:45:03.177348  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:03.230554  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:03.230592  213529 retry.go:31] will retry after 6.157061735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:03.231964  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:03.231992  213529 retry.go:31] will retry after 2.442330177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:03.265673  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:05.674886  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:05.729807  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:05.729849  213529 retry.go:31] will retry after 3.612542584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:05.765524  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:08.265205  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:09.342654  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:45:09.388406  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:09.399682  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:09.399719  213529 retry.go:31] will retry after 6.61412336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:09.445445  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:09.445496  213529 retry.go:31] will retry after 9.139498483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:10.265494  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:12.765436  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:15.265528  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:16.014029  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:16.069677  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:16.069742  213529 retry.go:31] will retry after 11.238798751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:17.265736  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:18.585243  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:18.639573  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:18.639614  213529 retry.go:31] will retry after 11.58446266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:19.765693  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:22.265326  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:24.765252  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:27.265337  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:27.309539  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:27.366695  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:27.366733  213529 retry.go:31] will retry after 11.52939287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:29.765203  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:30.224984  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:30.281273  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:30.281310  213529 retry.go:31] will retry after 18.613032536s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:31.765443  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:34.264978  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:36.265283  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:38.265904  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:38.897369  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:38.954353  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:38.954404  213529 retry.go:31] will retry after 17.265980832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:40.764949  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:42.765551  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:45.265513  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:47.765300  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:48.895015  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:48.951679  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:48.951716  213529 retry.go:31] will retry after 21.892988656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:49.765899  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:52.265621  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:54.765488  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:56.220544  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:56.276573  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:56.276606  213529 retry.go:31] will retry after 23.018555863s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:56.765898  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:59.265629  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:01.765354  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:04.265243  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:06.265681  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:08.765326  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:46:10.845467  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:46:10.902210  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:46:10.902367  213529 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1009 19:46:11.265125  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:13.265908  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:15.765756  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:18.265480  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:46:19.296047  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:46:19.352095  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:46:19.352218  213529 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:46:19.354638  213529 out.go:179] * Enabled addons: 
	I1009 19:46:19.355945  213529 addons.go:514] duration metric: took 1m22.726170913s for enable addons: enabled=[]
	W1009 19:46:20.265755  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:22.765780  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:25.265698  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:27.765417  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:30.265282  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:32.765120  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:34.765843  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:37.265533  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:39.765374  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:42.265123  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:44.265770  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:46.765835  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:49.265312  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:51.765030  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:53.765470  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:56.265306  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:58.765186  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:01.265058  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:03.265774  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:05.765635  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:08.265576  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:10.765798  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:13.265761  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:15.765119  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:18.265077  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:20.265918  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:22.765962  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:25.264961  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:27.265736  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:29.765764  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:32.265610  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:34.765657  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:37.265747  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:39.765491  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:42.265404  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:44.765558  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:47.265514  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:49.765290  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:52.265293  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:54.765328  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:56.765484  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:59.265305  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:01.765149  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:03.765915  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:06.265924  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:08.765952  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:11.265908  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:13.765815  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:16.264979  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:18.765137  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:20.765889  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:23.265669  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:25.765672  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:28.265325  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:30.765046  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:32.765624  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:35.265556  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:37.265628  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:39.765520  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:42.265310  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:44.765281  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:47.265094  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:49.265648  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:51.765426  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:53.765652  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:56.264999  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:58.265225  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:00.265508  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:02.265792  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:04.765259  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:06.765636  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:09.265082  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:11.265335  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:13.765305  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:15.765755  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:18.264951  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:20.265332  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:22.265635  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:24.265896  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:26.765549  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:29.265008  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:31.265176  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:33.765225  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:35.765576  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:38.265134  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:40.265493  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:42.765451  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:44.765511  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:47.265203  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:49.765355  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:52.265333  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:54.765296  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:56.765453  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:58.765650  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:01.265098  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:03.265263  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:05.265665  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:07.765412  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:09.765500  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:11.765824  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:14.264992  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:16.265244  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:18.265292  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:20.265689  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:22.765039  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:24.765172  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:26.765326  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:29.265183  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:31.764935  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:34.265071  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:36.265451  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:38.265823  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:40.765096  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:43.264909  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:45.265266  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:47.265687  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:49.765194  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:51.765247  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:54.265134  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:56.265319  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:50:56.765195  213529 node_ready.go:38] duration metric: took 6m0.000867219s for node "ha-898615" to be "Ready" ...
	I1009 19:50:56.767874  213529 out.go:203] 
	W1009 19:50:56.769214  213529 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:50:56.769244  213529 out.go:285] * 
	W1009 19:50:56.771156  213529 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:50:56.772598  213529 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-amd64 -p ha-898615 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-898615
helpers_test.go:243: (dbg) docker inspect ha-898615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	        "Created": "2025-10-09T19:27:46.185451139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213747,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:44:49.941874734Z",
	            "FinishedAt": "2025-10-09T19:44:48.613925122Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hostname",
	        "HostsPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hosts",
	        "LogPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e-json.log",
	        "Name": "/ha-898615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-898615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-898615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	                "LowerDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-898615",
	                "Source": "/var/lib/docker/volumes/ha-898615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-898615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-898615",
	                "name.minikube.sigs.k8s.io": "ha-898615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e470e353e06ffdcc8ac77f77b52e04dc5a3b643fb3168ea2b3827d52af8a235b",
	            "SandboxKey": "/var/run/docker/netns/e470e353e06f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-898615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:8e:9e:52:56:fb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a28b71fbb3464ef90fae817eea2f214ee0a3d1fe379af7ad6c6cc41b0261919e",
	                    "EndpointID": "d184af654fb96cd2156924061667ddadda3f85161b00b7d762c0f3c72fcbe2ef",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-898615",
	                        "24eb0e5b6c52"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615: exit status 2 (318.600031ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node add --alsologtostderr -v 5                                                    │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node stop m02 --alsologtostderr -v 5                                               │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node start m02 --alsologtostderr -v 5                                              │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node list --alsologtostderr -v 5                                                   │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ stop    │ ha-898615 stop --alsologtostderr -v 5                                                        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ ha-898615 start --wait true --alsologtostderr -v 5                                           │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ node    │ ha-898615 node list --alsologtostderr -v 5                                                   │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:44 UTC │                     │
	│ node    │ ha-898615 node delete m03 --alsologtostderr -v 5                                             │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:44 UTC │                     │
	│ stop    │ ha-898615 stop --alsologtostderr -v 5                                                        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:44 UTC │ 09 Oct 25 19:44 UTC │
	│ start   │ ha-898615 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:44 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:44:49
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:44:49.701374  213529 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:44:49.701684  213529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:44:49.701694  213529 out.go:374] Setting ErrFile to fd 2...
	I1009 19:44:49.701699  213529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:44:49.701891  213529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:44:49.702347  213529 out.go:368] Setting JSON to false
	I1009 19:44:49.703363  213529 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5239,"bootTime":1760033851,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:44:49.703499  213529 start.go:143] virtualization: kvm guest
	I1009 19:44:49.705480  213529 out.go:179] * [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:44:49.706677  213529 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:44:49.706680  213529 notify.go:221] Checking for updates...
	I1009 19:44:49.709030  213529 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:44:49.710400  213529 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:44:49.711704  213529 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:44:49.712804  213529 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:44:49.713905  213529 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:44:49.715428  213529 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:49.715879  213529 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:44:49.737923  213529 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:44:49.738109  213529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:44:49.796426  213529 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:44:49.785317755 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:44:49.796539  213529 docker.go:319] overlay module found
	I1009 19:44:49.801541  213529 out.go:179] * Using the docker driver based on existing profile
	I1009 19:44:49.802798  213529 start.go:309] selected driver: docker
	I1009 19:44:49.802817  213529 start.go:930] validating driver "docker" against &{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:44:49.802903  213529 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:44:49.802989  213529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:44:49.866941  213529 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:44:49.857185251 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:44:49.867781  213529 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:44:49.867825  213529 cni.go:84] Creating CNI manager for ""
	I1009 19:44:49.867876  213529 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:44:49.867941  213529 start.go:353] cluster config:
	{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1009 19:44:49.869783  213529 out.go:179] * Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	I1009 19:44:49.871046  213529 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:44:49.872323  213529 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:44:49.873634  213529 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:44:49.873676  213529 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:44:49.873671  213529 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:44:49.873684  213529 cache.go:58] Caching tarball of preloaded images
	I1009 19:44:49.873769  213529 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:44:49.873780  213529 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:44:49.873868  213529 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:44:49.894117  213529 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:44:49.894140  213529 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:44:49.894160  213529 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:44:49.894193  213529 start.go:361] acquireMachinesLock for ha-898615: {Name:mk23a3ebf19c307a491c00f9452d757837bd240e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:44:49.894262  213529 start.go:365] duration metric: took 46.947µs to acquireMachinesLock for "ha-898615"
	I1009 19:44:49.894284  213529 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:44:49.894295  213529 fix.go:55] fixHost starting: 
	I1009 19:44:49.894546  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:49.912866  213529 fix.go:113] recreateIfNeeded on ha-898615: state=Stopped err=<nil>
	W1009 19:44:49.912910  213529 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:44:49.914819  213529 out.go:252] * Restarting existing docker container for "ha-898615" ...
	I1009 19:44:49.914886  213529 cli_runner.go:164] Run: docker start ha-898615
	I1009 19:44:50.154621  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:50.173856  213529 kic.go:430] container "ha-898615" state is running.
	I1009 19:44:50.174272  213529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:44:50.192860  213529 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:44:50.193122  213529 machine.go:93] provisionDockerMachine start ...
	I1009 19:44:50.193203  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:50.211807  213529 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:50.212085  213529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:44:50.212111  213529 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:44:50.212792  213529 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45050->127.0.0.1:32793: read: connection reset by peer
	I1009 19:44:53.362882  213529 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:44:53.362920  213529 ubuntu.go:182] provisioning hostname "ha-898615"
	I1009 19:44:53.363008  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:53.383229  213529 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:53.383482  213529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:44:53.383500  213529 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-898615 && echo "ha-898615" | sudo tee /etc/hostname
	I1009 19:44:53.540739  213529 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:44:53.540832  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:53.559203  213529 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:53.559489  213529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:44:53.559515  213529 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-898615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-898615/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-898615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:44:53.707903  213529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:44:53.707951  213529 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:44:53.707980  213529 ubuntu.go:190] setting up certificates
	I1009 19:44:53.707995  213529 provision.go:84] configureAuth start
	I1009 19:44:53.708056  213529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:44:53.726880  213529 provision.go:143] copyHostCerts
	I1009 19:44:53.726919  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:44:53.726954  213529 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:44:53.726969  213529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:44:53.727040  213529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:44:53.727121  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:44:53.727138  213529 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:44:53.727144  213529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:44:53.727170  213529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:44:53.727216  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:44:53.727232  213529 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:44:53.727242  213529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:44:53.727264  213529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:44:53.727314  213529 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.ha-898615 san=[127.0.0.1 192.168.49.2 ha-898615 localhost minikube]
	I1009 19:44:53.827371  213529 provision.go:177] copyRemoteCerts
	I1009 19:44:53.827447  213529 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:44:53.827485  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:53.846303  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:53.951136  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:44:53.951199  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:44:53.969281  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:44:53.969347  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:44:53.987249  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:44:53.987314  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:44:54.005891  213529 provision.go:87] duration metric: took 297.874582ms to configureAuth
	I1009 19:44:54.005921  213529 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:44:54.006109  213529 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:54.006224  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.024397  213529 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:54.024626  213529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:44:54.024642  213529 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:44:54.289546  213529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:44:54.289573  213529 machine.go:96] duration metric: took 4.096433967s to provisionDockerMachine
	I1009 19:44:54.289589  213529 start.go:294] postStartSetup for "ha-898615" (driver="docker")
	I1009 19:44:54.289601  213529 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:44:54.289664  213529 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:44:54.289714  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.308340  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:54.413217  213529 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:44:54.417126  213529 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:44:54.417190  213529 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:44:54.417225  213529 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:44:54.417286  213529 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:44:54.417372  213529 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:44:54.417406  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:44:54.417501  213529 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:44:54.425333  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:44:54.443854  213529 start.go:297] duration metric: took 154.246925ms for postStartSetup
	I1009 19:44:54.443940  213529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:44:54.443976  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.461915  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:54.563125  213529 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
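
The two df probes above only need the used percentage and free gigabytes on /var. A minimal Go sketch of the same check (not minikube's code; it queries statfs directly instead of parsing df, and the /var path is taken from the log):

// diskcheck.go - stand-in for the "df -h /var" / "df -BG /var" probes above.
// Linux-only; asks statfs directly instead of parsing df output.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/var", &st); err != nil {
		panic(err)
	}
	total := st.Blocks * uint64(st.Bsize)
	avail := st.Bavail * uint64(st.Bsize)
	// Close to df's Use% column, though not byte-for-byte identical.
	fmt.Printf("/var: %dG available, %d%% used\n", avail>>30, 100-(avail*100)/total)
}
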
	I1009 19:44:54.568111  213529 fix.go:57] duration metric: took 4.673810177s for fixHost
	I1009 19:44:54.568142  213529 start.go:84] releasing machines lock for "ha-898615", held for 4.673868514s
	I1009 19:44:54.568206  213529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:44:54.586879  213529 ssh_runner.go:195] Run: cat /version.json
	I1009 19:44:54.586918  213529 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:44:54.586944  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.586979  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.606718  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:54.607259  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:54.762981  213529 ssh_runner.go:195] Run: systemctl --version
	I1009 19:44:54.769817  213529 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:44:54.808737  213529 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:44:54.813835  213529 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:44:54.813899  213529 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:44:54.822567  213529 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:44:54.822602  213529 start.go:496] detecting cgroup driver to use...
	I1009 19:44:54.822639  213529 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:44:54.822691  213529 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:44:54.837558  213529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:44:54.850649  213529 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:44:54.850721  213529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:44:54.865664  213529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:44:54.878415  213529 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:44:54.957542  213529 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:44:55.038948  213529 docker.go:234] disabling docker service ...
	I1009 19:44:55.039033  213529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:44:55.054311  213529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:44:55.066894  213529 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:44:55.146756  213529 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:44:55.226886  213529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:44:55.239751  213529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:44:55.254322  213529 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:44:55.254392  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.263683  213529 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:44:55.263764  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.272570  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.281877  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.291212  213529 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:44:55.299205  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.308053  213529 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.316488  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
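
For reference, the sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys. This is a reconstruction from the commands in the log, not a dump of the actual file, and the [crio.image]/[crio.runtime] table headers are assumed from the stock drop-in:

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
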
	I1009 19:44:55.325623  213529 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:44:55.333246  213529 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:44:55.340957  213529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:44:55.420337  213529 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:44:55.530206  213529 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:44:55.530277  213529 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:44:55.534555  213529 start.go:564] Will wait 60s for crictl version
	I1009 19:44:55.534616  213529 ssh_runner.go:195] Run: which crictl
	I1009 19:44:55.538439  213529 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:44:55.564260  213529 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:44:55.564337  213529 ssh_runner.go:195] Run: crio --version
	I1009 19:44:55.593049  213529 ssh_runner.go:195] Run: crio --version
	I1009 19:44:55.622200  213529 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:44:55.623540  213529 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:44:55.641466  213529 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:44:55.646233  213529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
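
The bash one-liner above keeps exactly one host.minikube.internal entry in /etc/hosts: strip any existing line, append a fresh one, copy the result back. The same idea as a Go sketch (run as root; unlike the logged command, it writes /etc/hosts directly instead of staging through /tmp/h.$$ and sudo cp):

// hostsentry.go - ensure /etc/hosts carries exactly one host.minikube.internal line.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous entry so repeated runs stay idempotent.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
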
	I1009 19:44:55.657668  213529 kubeadm.go:883] updating cluster {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClien
tPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:44:55.657780  213529 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:44:55.657822  213529 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:44:55.689903  213529 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:44:55.689929  213529 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:44:55.689989  213529 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:44:55.716841  213529 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:44:55.716874  213529 cache_images.go:85] Images are preloaded, skipping loading
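
The two "sudo crictl images --output json" calls above are how crio.go decides that the preload tarball is already extracted. A rough standalone sketch of that check in Go (not minikube's implementation; the JSON field names images/repoTags are assumed from crictl's output format, and the image names listed are just examples from this run):

// preloadcheck.go - ask crictl for its image list and report which expected images exist.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// If any required image is missing, the preload would have to be extracted.
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.34.1",
		"registry.k8s.io/pause:3.10.1",
	} {
		fmt.Printf("%-45s present=%v\n", want, have[want])
	}
}
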
	I1009 19:44:55.716885  213529 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:44:55.717021  213529 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-898615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:44:55.717104  213529 ssh_runner.go:195] Run: crio config
	I1009 19:44:55.762724  213529 cni.go:84] Creating CNI manager for ""
	I1009 19:44:55.762743  213529 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:44:55.762760  213529 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:44:55.762781  213529 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-898615 NodeName:ha-898615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:44:55.762917  213529 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-898615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:44:55.762981  213529 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:44:55.771348  213529 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:44:55.771430  213529 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:44:55.779128  213529 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:44:55.792326  213529 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:44:55.805801  213529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 19:44:55.818503  213529 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:44:55.822410  213529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:44:55.832657  213529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:44:55.914951  213529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:44:55.941861  213529 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615 for IP: 192.168.49.2
	I1009 19:44:55.941890  213529 certs.go:195] generating shared ca certs ...
	I1009 19:44:55.941926  213529 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:44:55.942116  213529 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:44:55.942169  213529 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:44:55.942183  213529 certs.go:257] generating profile certs ...
	I1009 19:44:55.942287  213529 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key
	I1009 19:44:55.942359  213529 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.ff60cacd
	I1009 19:44:55.942424  213529 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key
	I1009 19:44:55.942440  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:44:55.942457  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:44:55.942474  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:44:55.942488  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:44:55.942501  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:44:55.942518  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:44:55.942537  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:44:55.942552  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:44:55.942619  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:44:55.942659  213529 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:44:55.942668  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:44:55.942696  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:44:55.942725  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:44:55.942757  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:44:55.942808  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:44:55.942845  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:44:55.942867  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:44:55.942884  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:44:55.943621  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:44:55.964066  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:44:55.983870  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:44:56.003424  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:44:56.027059  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1009 19:44:56.045446  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:44:56.062784  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:44:56.080346  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:44:56.098356  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:44:56.115529  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:44:56.133046  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:44:56.151123  213529 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:44:56.164371  213529 ssh_runner.go:195] Run: openssl version
	I1009 19:44:56.171082  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:44:56.180682  213529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:44:56.184714  213529 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:44:56.184782  213529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:44:56.219575  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:44:56.228330  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:44:56.237302  213529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:44:56.241163  213529 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:44:56.241221  213529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:44:56.275220  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:44:56.283849  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:44:56.292853  213529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:44:56.296942  213529 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:44:56.297002  213529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:44:56.331446  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:44:56.340819  213529 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:44:56.344986  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:44:56.380467  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:44:56.415493  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:44:56.456227  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:44:56.501884  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:44:56.538941  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
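
Each of the openssl x509 ... -checkend 86400 runs above asks one question per certificate: does it expire within the next 24 hours? A minimal Go equivalent (a sketch, not minikube's certs.go; the paths are the ones from this log):

// certcheck.go - report whether a certificate expires within a given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Println(p, "error:", err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
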
	I1009 19:44:56.573879  213529 kubeadm.go:400] StartCluster: {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPa
th: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:44:56.573988  213529 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:44:56.574038  213529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:44:56.602728  213529 cri.go:89] found id: ""
	I1009 19:44:56.602785  213529 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:44:56.610971  213529 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:44:56.610988  213529 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:44:56.611028  213529 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:44:56.618277  213529 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:44:56.618823  213529 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:44:56.618971  213529 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-137890/kubeconfig needs updating (will repair): [kubeconfig missing "ha-898615" cluster setting kubeconfig missing "ha-898615" context setting]
	I1009 19:44:56.619299  213529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:44:56.619977  213529 kapi.go:59] client config for ha-898615: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:44:56.620511  213529 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:44:56.620536  213529 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:44:56.620544  213529 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:44:56.620550  213529 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:44:56.620560  213529 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:44:56.620569  213529 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:44:56.621020  213529 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:44:56.628535  213529 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:44:56.628568  213529 kubeadm.go:601] duration metric: took 17.574485ms to restartPrimaryControlPlane
	I1009 19:44:56.628593  213529 kubeadm.go:402] duration metric: took 54.723918ms to StartCluster
	I1009 19:44:56.628613  213529 settings.go:142] acquiring lock: {Name:mk34466033fb866bc8e167d2d953624ad0802283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:44:56.628681  213529 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:44:56.629423  213529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:44:56.629662  213529 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:44:56.629817  213529 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:56.629772  213529 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:44:56.629859  213529 addons.go:69] Setting storage-provisioner=true in profile "ha-898615"
	I1009 19:44:56.629886  213529 addons.go:238] Setting addon storage-provisioner=true in "ha-898615"
	I1009 19:44:56.629917  213529 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:44:56.629863  213529 addons.go:69] Setting default-storageclass=true in profile "ha-898615"
	I1009 19:44:56.630024  213529 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-898615"
	I1009 19:44:56.630251  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:56.630340  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:56.639537  213529 out.go:179] * Verifying Kubernetes components...
	I1009 19:44:56.640997  213529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:44:56.650141  213529 kapi.go:59] client config for ha-898615: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:44:56.650441  213529 addons.go:238] Setting addon default-storageclass=true in "ha-898615"
	I1009 19:44:56.650481  213529 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:44:56.651083  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:56.651372  213529 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:44:56.652834  213529 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:44:56.652857  213529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:44:56.652904  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:56.672424  213529 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:44:56.672449  213529 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:44:56.672517  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:56.673443  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:56.697607  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:56.750572  213529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:44:56.764278  213529 node_ready.go:35] waiting up to 6m0s for node "ha-898615" to be "Ready" ...
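
node_ready.go is polling the Ready condition of node "ha-898615" through the client built in the kapi.go:59 dump above. A trimmed-down version of that loop using client-go (a sketch, not minikube's code, under the assumption that the cert/key/CA paths from that rest.Config are valid on the host running it):

// nodeready.go - poll a node's Ready condition until it is true or a deadline passes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt",
		},
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.Background(), "ha-898615", metav1.GetOptions{})
		if err != nil {
			// Typically "connection refused" while the apiserver is still restarting.
			fmt.Println("will retry:", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
			fmt.Println("node found but not Ready yet")
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}
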
	I1009 19:44:56.790989  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:44:56.807492  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:56.848417  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:56.848472  213529 retry.go:31] will retry after 181.300226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:44:56.863090  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:56.863123  213529 retry.go:31] will retry after 174.582695ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
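
Every failed apply above is followed by a retry.go:31 line scheduling another attempt with a slightly larger, jittered delay, because the apiserver behind localhost:8443 is still coming back up after the crio restart. A small stand-in for that pattern (not minikube's pkg/util/retry; the manifest path and attempt count are just illustrative):

// retrysketch.go - re-run "kubectl apply" with a growing, jittered delay until it succeeds.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int) error {
	delay := 150 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
		// Grow the delay and add jitter so parallel appliers don't retry in lockstep;
		// this is roughly the spacing seen above (~0.18s, 0.36s, 0.5s, 1s, 2.6s, ...).
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, lastErr)
		time.Sleep(sleep)
		delay *= 2
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 8); err != nil {
		fmt.Println("giving up:", err)
	}
}
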
	I1009 19:44:57.030457  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:44:57.038253  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:57.099728  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.099769  213529 retry.go:31] will retry after 488.394922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:44:57.103491  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.103516  213529 retry.go:31] will retry after 360.880737ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.464716  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:57.519993  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.520038  213529 retry.go:31] will retry after 545.599641ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.589293  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:44:57.644623  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.644657  213529 retry.go:31] will retry after 328.462818ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.973799  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:44:58.029584  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.029617  213529 retry.go:31] will retry after 567.831757ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.065802  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:58.119966  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.120001  213529 retry.go:31] will retry after 1.041516604s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.598304  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:44:58.652889  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.652928  213529 retry.go:31] will retry after 716.276622ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:44:58.765698  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:44:59.162239  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:59.218055  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:59.218096  213529 retry.go:31] will retry after 1.23966397s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:59.370025  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:44:59.425158  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:59.425205  213529 retry.go:31] will retry after 1.359321817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:00.458325  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:00.515848  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:00.515882  213529 retry.go:31] will retry after 2.661338285s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:00.765913  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:00.785102  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:00.843571  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:00.843612  213529 retry.go:31] will retry after 2.328073619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:03.172702  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:45:03.177348  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:03.230554  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:03.230592  213529 retry.go:31] will retry after 6.157061735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:03.231964  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:03.231992  213529 retry.go:31] will retry after 2.442330177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:03.265673  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:05.674886  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:05.729807  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:05.729849  213529 retry.go:31] will retry after 3.612542584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:05.765524  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:08.265205  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:09.342654  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:45:09.388406  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:09.399682  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:09.399719  213529 retry.go:31] will retry after 6.61412336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:09.445445  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:09.445496  213529 retry.go:31] will retry after 9.139498483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:10.265494  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:12.765436  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:15.265528  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:16.014029  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:16.069677  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:16.069742  213529 retry.go:31] will retry after 11.238798751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:17.265736  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:18.585243  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:18.639573  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:18.639614  213529 retry.go:31] will retry after 11.58446266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:19.765693  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:22.265326  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:24.765252  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:27.265337  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:27.309539  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:27.366695  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:27.366733  213529 retry.go:31] will retry after 11.52939287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:29.765203  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:30.224984  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:30.281273  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:30.281310  213529 retry.go:31] will retry after 18.613032536s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:31.765443  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:34.264978  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:36.265283  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:38.265904  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:38.897369  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:38.954353  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:38.954404  213529 retry.go:31] will retry after 17.265980832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:40.764949  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:42.765551  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:45.265513  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:47.765300  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:48.895015  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:48.951679  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:48.951716  213529 retry.go:31] will retry after 21.892988656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:49.765899  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:52.265621  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:54.765488  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:56.220544  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:56.276573  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:56.276606  213529 retry.go:31] will retry after 23.018555863s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:56.765898  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:59.265629  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:01.765354  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:04.265243  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:06.265681  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:08.765326  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:46:10.845467  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:46:10.902210  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:46:10.902367  213529 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1009 19:46:11.265125  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:13.265908  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:15.765756  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:18.265480  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:46:19.296047  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:46:19.352095  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:46:19.352218  213529 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:46:19.354638  213529 out.go:179] * Enabled addons: 
	I1009 19:46:19.355945  213529 addons.go:514] duration metric: took 1m22.726170913s for enable addons: enabled=[]
	W1009 19:46:20.265755  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:22.765780  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:25.265698  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:27.765417  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:30.265282  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:32.765120  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:34.765843  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:37.265533  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:39.765374  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:42.265123  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:44.265770  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:46.765835  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:49.265312  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:51.765030  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:53.765470  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:56.265306  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:58.765186  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:01.265058  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:03.265774  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:05.765635  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:08.265576  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:10.765798  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:13.265761  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:15.765119  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:18.265077  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:20.265918  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:22.765962  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:25.264961  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:27.265736  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:29.765764  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:32.265610  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:34.765657  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:37.265747  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:39.765491  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:42.265404  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:44.765558  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:47.265514  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:49.765290  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:52.265293  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:54.765328  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:56.765484  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:59.265305  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:01.765149  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:03.765915  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:06.265924  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:08.765952  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:11.265908  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:13.765815  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:16.264979  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:18.765137  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:20.765889  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:23.265669  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:25.765672  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:28.265325  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:30.765046  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:32.765624  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:35.265556  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:37.265628  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:39.765520  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:42.265310  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:44.765281  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:47.265094  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:49.265648  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:51.765426  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:53.765652  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:56.264999  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:58.265225  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:00.265508  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:02.265792  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:04.765259  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:06.765636  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:09.265082  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:11.265335  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:13.765305  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:15.765755  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:18.264951  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:20.265332  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:22.265635  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:24.265896  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:26.765549  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:29.265008  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:31.265176  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:33.765225  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:35.765576  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:38.265134  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:40.265493  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:42.765451  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:44.765511  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:47.265203  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:49.765355  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:52.265333  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:54.765296  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:56.765453  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:58.765650  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:01.265098  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:03.265263  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:05.265665  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:07.765412  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:09.765500  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:11.765824  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:14.264992  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:16.265244  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:18.265292  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:20.265689  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:22.765039  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:24.765172  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:26.765326  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:29.265183  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:31.764935  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:34.265071  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:36.265451  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:38.265823  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:40.765096  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:43.264909  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:45.265266  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:47.265687  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:49.765194  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:51.765247  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:54.265134  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:56.265319  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:50:56.765195  213529 node_ready.go:38] duration metric: took 6m0.000867219s for node "ha-898615" to be "Ready" ...
	I1009 19:50:56.767874  213529 out.go:203] 
	W1009 19:50:56.769214  213529 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:50:56.769244  213529 out.go:285] * 
	W1009 19:50:56.771156  213529 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:50:56.772598  213529 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:50:51 ha-898615 crio[517]: time="2025-10-09T19:50:51.057365578Z" level=info msg="createCtr: removing container 7bbe8e9f95fa7c2a24ca731e70d1c2050e24d9896658b905bc75af699c95e2b7" id=4cf88d7e-0c00-4b22-a05f-e14968eda2fa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:51 ha-898615 crio[517]: time="2025-10-09T19:50:51.057415207Z" level=info msg="createCtr: deleting container 7bbe8e9f95fa7c2a24ca731e70d1c2050e24d9896658b905bc75af699c95e2b7 from storage" id=4cf88d7e-0c00-4b22-a05f-e14968eda2fa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:51 ha-898615 crio[517]: time="2025-10-09T19:50:51.059666218Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-898615_kube-system_2d43b86ac03e93e11f35c923e2560103_0" id=4cf88d7e-0c00-4b22-a05f-e14968eda2fa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.036434515Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=b5f99731-3bfb-4b7e-9b6e-b74fc0f4378e name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.037297245Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=a9c219fc-2038-444a-8d2a-c8ee3d0720cc name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.038142766Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-898615/kube-scheduler" id=d7600065-5a91-46f4-affe-f31461470a38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.038360896Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.04203751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.042483314Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.060338662Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d7600065-5a91-46f4-affe-f31461470a38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.06181782Z" level=info msg="createCtr: deleting container ID 9d14890568eccae5c7353af71ccef59cb0fe1dda70f89b6799dd6a12b92042dd from idIndex" id=d7600065-5a91-46f4-affe-f31461470a38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.061852385Z" level=info msg="createCtr: removing container 9d14890568eccae5c7353af71ccef59cb0fe1dda70f89b6799dd6a12b92042dd" id=d7600065-5a91-46f4-affe-f31461470a38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.06188441Z" level=info msg="createCtr: deleting container 9d14890568eccae5c7353af71ccef59cb0fe1dda70f89b6799dd6a12b92042dd from storage" id=d7600065-5a91-46f4-affe-f31461470a38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.064105448Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-898615_kube-system_bc0e16a1814c5389485436acdfc968ed_0" id=d7600065-5a91-46f4-affe-f31461470a38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.0355101Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=0d089190-0255-4132-8944-47e3a171eec9 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.036424184Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=a6914955-8723-4589-afa9-5090877fc579 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.037428452Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-898615/kube-controller-manager" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.037644021Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.041015164Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.041462284Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.057834228Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.059302014Z" level=info msg="createCtr: deleting container ID ce64a8d012383f65cc84c71a301fcffb36f3b33f4d99f9979ab7ca8250098045 from idIndex" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.05934333Z" level=info msg="createCtr: removing container ce64a8d012383f65cc84c71a301fcffb36f3b33f4d99f9979ab7ca8250098045" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.059392517Z" level=info msg="createCtr: deleting container ce64a8d012383f65cc84c71a301fcffb36f3b33f4d99f9979ab7ca8250098045 from storage" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.061445962Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-898615_kube-system_606a9e8949f01295d66148e5eac379ce_0" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:50:57.746752    2017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:50:57.747467    2017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:50:57.748928    2017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:50:57.750054    2017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:50:57.750768    2017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:50:57 up  1:33,  0 user,  load average: 0.00, 0.02, 0.73
	Linux ha-898615 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:50:51 ha-898615 kubelet[666]: E1009 19:50:51.060094     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:50:51 ha-898615 kubelet[666]:         container etcd start failed in pod etcd-ha-898615_kube-system(2d43b86ac03e93e11f35c923e2560103): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:50:51 ha-898615 kubelet[666]:  > logger="UnhandledError"
	Oct 09 19:50:51 ha-898615 kubelet[666]: E1009 19:50:51.060138     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-898615" podUID="2d43b86ac03e93e11f35c923e2560103"
	Oct 09 19:50:51 ha-898615 kubelet[666]: E1009 19:50:51.681089     666 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:50:51 ha-898615 kubelet[666]: I1009 19:50:51.858858     666 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:50:51 ha-898615 kubelet[666]: E1009 19:50:51.859301     666 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	Oct 09 19:50:53 ha-898615 kubelet[666]: E1009 19:50:53.035941     666 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:50:53 ha-898615 kubelet[666]: E1009 19:50:53.064430     666 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:50:53 ha-898615 kubelet[666]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:50:53 ha-898615 kubelet[666]:  > podSandboxID="829862355c0892a10f586a11617b0eee63c8b9aa21bbf84935814681a67803f6"
	Oct 09 19:50:53 ha-898615 kubelet[666]: E1009 19:50:53.064534     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:50:53 ha-898615 kubelet[666]:         container kube-scheduler start failed in pod kube-scheduler-ha-898615_kube-system(bc0e16a1814c5389485436acdfc968ed): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:50:53 ha-898615 kubelet[666]:  > logger="UnhandledError"
	Oct 09 19:50:53 ha-898615 kubelet[666]: E1009 19:50:53.064571     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-898615" podUID="bc0e16a1814c5389485436acdfc968ed"
	Oct 09 19:50:55 ha-898615 kubelet[666]: E1009 19:50:55.842360     666 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 09 19:50:56 ha-898615 kubelet[666]: E1009 19:50:56.035079     666 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:50:56 ha-898615 kubelet[666]: E1009 19:50:56.051793     666 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-898615\" not found"
	Oct 09 19:50:56 ha-898615 kubelet[666]: E1009 19:50:56.061763     666 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:50:56 ha-898615 kubelet[666]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:50:56 ha-898615 kubelet[666]:  > podSandboxID="c69416833813892406432d22789fcb941cf442d503fc8a7a72d459c819b42203"
	Oct 09 19:50:56 ha-898615 kubelet[666]: E1009 19:50:56.061885     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:50:56 ha-898615 kubelet[666]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-898615_kube-system(606a9e8949f01295d66148e5eac379ce): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:50:56 ha-898615 kubelet[666]:  > logger="UnhandledError"
	Oct 09 19:50:56 ha-898615 kubelet[666]: E1009 19:50:56.061937     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-898615" podUID="606a9e8949f01295d66148e5eac379ce"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615: exit status 2 (312.559936ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-898615" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (368.50s)
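Note on the failure mode above: every control-plane container create in the CRI-O and kubelet excerpts fails with "container create failed: cannot open sd-bus: No such file or directory", i.e. the OCI runtime cannot reach the systemd bus it needs for the systemd cgroup manager inside the node container. A minimal diagnostic sketch, not part of the test suite; it assumes the docker driver and the profile name from this report and that the node answers SSH:

    # check whether systemd and its bus sockets are up inside the node container
    out/minikube-linux-amd64 -p ha-898615 ssh -- 'systemctl is-system-running; ls -l /run/systemd/private /run/dbus/system_bus_socket'
    # confirm which cgroup manager CRI-O is configured to use
    out/minikube-linux-amd64 -p ha-898615 ssh -- 'sudo grep -r cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null'

If systemd (or its bus socket) is not up, a "cgroupfs" cgroup manager would avoid the sd-bus dependency; that is a hypothetical workaround, not something this report verifies.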

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-898615" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-898615\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-898615\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-898615\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":nul
l,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list
--output json"
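The assertion above compares the "Status" field of the ha-898615 entry in the JSON emitted by `profile list --output json` (expected "Degraded", observed "Starting"). A quick way to pull that field out by hand, assuming jq is available; the query simply mirrors the structure visible in the quoted JSON:

    out/minikube-linux-amd64 profile list --output json | jq -r '.valid[] | select(.Name=="ha-898615") | .Status'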
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-898615
helpers_test.go:243: (dbg) docker inspect ha-898615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	        "Created": "2025-10-09T19:27:46.185451139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213747,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:44:49.941874734Z",
	            "FinishedAt": "2025-10-09T19:44:48.613925122Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hostname",
	        "HostsPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hosts",
	        "LogPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e-json.log",
	        "Name": "/ha-898615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-898615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-898615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	                "LowerDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-898615",
	                "Source": "/var/lib/docker/volumes/ha-898615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-898615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-898615",
	                "name.minikube.sigs.k8s.io": "ha-898615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e470e353e06ffdcc8ac77f77b52e04dc5a3b643fb3168ea2b3827d52af8a235b",
	            "SandboxKey": "/var/run/docker/netns/e470e353e06f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-898615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:8e:9e:52:56:fb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a28b71fbb3464ef90fae817eea2f214ee0a3d1fe379af7ad6c6cc41b0261919e",
	                    "EndpointID": "d184af654fb96cd2156924061667ddadda3f85161b00b7d762c0f3c72fcbe2ef",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-898615",
	                        "24eb0e5b6c52"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
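The docker inspect output above can be queried directly for the published API-server port and the node address; these Go templates are the same shape the "Last Start" log further down uses for the SSH port, pointed here at 8443/tcp and the network IP:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-898615
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-898615

Against the JSON above these print 32796 and 192.168.49.2.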
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615: exit status 2 (311.726698ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node add --alsologtostderr -v 5                                                    │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node stop m02 --alsologtostderr -v 5                                               │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node start m02 --alsologtostderr -v 5                                              │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node list --alsologtostderr -v 5                                                   │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ stop    │ ha-898615 stop --alsologtostderr -v 5                                                        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ ha-898615 start --wait true --alsologtostderr -v 5                                           │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ node    │ ha-898615 node list --alsologtostderr -v 5                                                   │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:44 UTC │                     │
	│ node    │ ha-898615 node delete m03 --alsologtostderr -v 5                                             │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:44 UTC │                     │
	│ stop    │ ha-898615 stop --alsologtostderr -v 5                                                        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:44 UTC │ 09 Oct 25 19:44 UTC │
	│ start   │ ha-898615 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:44 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:44:49
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:44:49.701374  213529 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:44:49.701684  213529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:44:49.701694  213529 out.go:374] Setting ErrFile to fd 2...
	I1009 19:44:49.701699  213529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:44:49.701891  213529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:44:49.702347  213529 out.go:368] Setting JSON to false
	I1009 19:44:49.703363  213529 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5239,"bootTime":1760033851,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:44:49.703499  213529 start.go:143] virtualization: kvm guest
	I1009 19:44:49.705480  213529 out.go:179] * [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:44:49.706677  213529 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:44:49.706680  213529 notify.go:221] Checking for updates...
	I1009 19:44:49.709030  213529 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:44:49.710400  213529 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:44:49.711704  213529 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:44:49.712804  213529 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:44:49.713905  213529 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:44:49.715428  213529 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:49.715879  213529 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:44:49.737923  213529 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:44:49.738109  213529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:44:49.796426  213529 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:44:49.785317755 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:44:49.796539  213529 docker.go:319] overlay module found
	I1009 19:44:49.801541  213529 out.go:179] * Using the docker driver based on existing profile
	I1009 19:44:49.802798  213529 start.go:309] selected driver: docker
	I1009 19:44:49.802817  213529 start.go:930] validating driver "docker" against &{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:44:49.802903  213529 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:44:49.802989  213529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:44:49.866941  213529 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:44:49.857185251 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:44:49.867781  213529 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:44:49.867825  213529 cni.go:84] Creating CNI manager for ""
	I1009 19:44:49.867876  213529 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:44:49.867941  213529 start.go:353] cluster config:
	{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1009 19:44:49.869783  213529 out.go:179] * Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	I1009 19:44:49.871046  213529 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:44:49.872323  213529 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:44:49.873634  213529 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:44:49.873676  213529 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:44:49.873671  213529 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:44:49.873684  213529 cache.go:58] Caching tarball of preloaded images
	I1009 19:44:49.873769  213529 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:44:49.873780  213529 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:44:49.873868  213529 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:44:49.894117  213529 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:44:49.894140  213529 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:44:49.894160  213529 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:44:49.894193  213529 start.go:361] acquireMachinesLock for ha-898615: {Name:mk23a3ebf19c307a491c00f9452d757837bd240e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:44:49.894262  213529 start.go:365] duration metric: took 46.947µs to acquireMachinesLock for "ha-898615"
	I1009 19:44:49.894284  213529 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:44:49.894295  213529 fix.go:55] fixHost starting: 
	I1009 19:44:49.894546  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:49.912866  213529 fix.go:113] recreateIfNeeded on ha-898615: state=Stopped err=<nil>
	W1009 19:44:49.912910  213529 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:44:49.914819  213529 out.go:252] * Restarting existing docker container for "ha-898615" ...
	I1009 19:44:49.914886  213529 cli_runner.go:164] Run: docker start ha-898615
	I1009 19:44:50.154621  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:50.173856  213529 kic.go:430] container "ha-898615" state is running.
	I1009 19:44:50.174272  213529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:44:50.192860  213529 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:44:50.193122  213529 machine.go:93] provisionDockerMachine start ...
	I1009 19:44:50.193203  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:50.211807  213529 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:50.212085  213529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:44:50.212111  213529 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:44:50.212792  213529 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45050->127.0.0.1:32793: read: connection reset by peer
	I1009 19:44:53.362882  213529 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:44:53.362920  213529 ubuntu.go:182] provisioning hostname "ha-898615"
	I1009 19:44:53.363008  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:53.383229  213529 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:53.383482  213529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:44:53.383500  213529 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-898615 && echo "ha-898615" | sudo tee /etc/hostname
	I1009 19:44:53.540739  213529 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:44:53.540832  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:53.559203  213529 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:53.559489  213529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:44:53.559515  213529 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-898615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-898615/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-898615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:44:53.707903  213529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:44:53.707951  213529 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:44:53.707980  213529 ubuntu.go:190] setting up certificates
	I1009 19:44:53.707995  213529 provision.go:84] configureAuth start
	I1009 19:44:53.708056  213529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:44:53.726880  213529 provision.go:143] copyHostCerts
	I1009 19:44:53.726919  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:44:53.726954  213529 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:44:53.726969  213529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:44:53.727040  213529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:44:53.727121  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:44:53.727138  213529 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:44:53.727144  213529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:44:53.727170  213529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:44:53.727216  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:44:53.727232  213529 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:44:53.727242  213529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:44:53.727264  213529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:44:53.727314  213529 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.ha-898615 san=[127.0.0.1 192.168.49.2 ha-898615 localhost minikube]
	I1009 19:44:53.827371  213529 provision.go:177] copyRemoteCerts
	I1009 19:44:53.827447  213529 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:44:53.827485  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:53.846303  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:53.951136  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:44:53.951199  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:44:53.969281  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:44:53.969347  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:44:53.987249  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:44:53.987314  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:44:54.005891  213529 provision.go:87] duration metric: took 297.874582ms to configureAuth
	I1009 19:44:54.005921  213529 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:44:54.006109  213529 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:54.006224  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.024397  213529 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:54.024626  213529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:44:54.024642  213529 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:44:54.289546  213529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:44:54.289573  213529 machine.go:96] duration metric: took 4.096433967s to provisionDockerMachine
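The step above drops CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' into /etc/sysconfig/crio.minikube and restarts CRI-O. Assuming the kicbase crio.service actually sources that environment file (an assumption, not shown in this log), the option can be spot-checked with:

  cat /etc/sysconfig/crio.minikube                     # the file written above
  ps -o args= -C crio | tr ' ' '\n' | grep insecure    # flag visible on the running daemon, if sourced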
	I1009 19:44:54.289589  213529 start.go:294] postStartSetup for "ha-898615" (driver="docker")
	I1009 19:44:54.289601  213529 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:44:54.289664  213529 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:44:54.289714  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.308340  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:54.413217  213529 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:44:54.417126  213529 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:44:54.417190  213529 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:44:54.417225  213529 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:44:54.417286  213529 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:44:54.417372  213529 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:44:54.417406  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:44:54.417501  213529 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:44:54.425333  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:44:54.443854  213529 start.go:297] duration metric: took 154.246925ms for postStartSetup
	I1009 19:44:54.443940  213529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:44:54.443976  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.461915  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:54.563125  213529 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:44:54.568111  213529 fix.go:57] duration metric: took 4.673810177s for fixHost
	I1009 19:44:54.568142  213529 start.go:84] releasing machines lock for "ha-898615", held for 4.673868514s
	I1009 19:44:54.568206  213529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:44:54.586879  213529 ssh_runner.go:195] Run: cat /version.json
	I1009 19:44:54.586918  213529 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:44:54.586944  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.586979  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.606718  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:54.607259  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:54.762981  213529 ssh_runner.go:195] Run: systemctl --version
	I1009 19:44:54.769817  213529 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:44:54.808737  213529 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:44:54.813835  213529 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:44:54.813899  213529 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:44:54.822567  213529 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:44:54.822602  213529 start.go:496] detecting cgroup driver to use...
	I1009 19:44:54.822639  213529 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:44:54.822691  213529 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:44:54.837558  213529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:44:54.850649  213529 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:44:54.850721  213529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:44:54.865664  213529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:44:54.878415  213529 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:44:54.957542  213529 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:44:55.038948  213529 docker.go:234] disabling docker service ...
	I1009 19:44:55.039033  213529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:44:55.054311  213529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:44:55.066894  213529 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:44:55.146756  213529 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:44:55.226886  213529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
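At this point cri-docker and docker have been stopped, their sockets disabled, and their services masked, so CRI-O is the only runtime left answering on the node. A hypothetical spot-check (unit names taken from the log above):

  systemctl is-enabled docker.service cri-docker.service    # both should report "masked"
  systemctl is-active docker.service || echo "docker is down, as expected"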
	I1009 19:44:55.239751  213529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:44:55.254322  213529 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:44:55.254392  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.263683  213529 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:44:55.263764  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.272570  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.281877  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.291212  213529 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:44:55.299205  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.308053  213529 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.316488  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.325623  213529 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:44:55.333246  213529 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:44:55.340957  213529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:44:55.420337  213529 ssh_runner.go:195] Run: sudo systemctl restart crio
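The sed edits above rewrite the CRI-O drop-in so the runtime uses the systemd cgroup manager, runs conmon in the pod cgroup, allows unprivileged low ports, and pins the pause image, after which the daemon is restarted. A spot-check of the resulting file (same path as in the log; the exact section layout of the drop-in is an assumption):

  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
  # expected, approximately:
  #   pause_image = "registry.k8s.io/pause:3.10.1"
  #   cgroup_manager = "systemd"
  #   conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",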
	I1009 19:44:55.530206  213529 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:44:55.530277  213529 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:44:55.534555  213529 start.go:564] Will wait 60s for crictl version
	I1009 19:44:55.534616  213529 ssh_runner.go:195] Run: which crictl
	I1009 19:44:55.538439  213529 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:44:55.564260  213529 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:44:55.564337  213529 ssh_runner.go:195] Run: crio --version
	I1009 19:44:55.593049  213529 ssh_runner.go:195] Run: crio --version
	I1009 19:44:55.622200  213529 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:44:55.623540  213529 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:44:55.641466  213529 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:44:55.646233  213529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:44:55.657668  213529 kubeadm.go:883] updating cluster {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClien
tPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:44:55.657780  213529 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:44:55.657822  213529 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:44:55.689903  213529 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:44:55.689929  213529 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:44:55.689989  213529 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:44:55.716841  213529 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:44:55.716874  213529 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:44:55.716885  213529 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:44:55.717021  213529 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-898615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:44:55.717104  213529 ssh_runner.go:195] Run: crio config
	I1009 19:44:55.762724  213529 cni.go:84] Creating CNI manager for ""
	I1009 19:44:55.762743  213529 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:44:55.762760  213529 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:44:55.762781  213529 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-898615 NodeName:ha-898615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:44:55.762917  213529 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-898615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:44:55.762981  213529 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:44:55.771348  213529 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:44:55.771430  213529 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:44:55.779128  213529 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:44:55.792326  213529 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:44:55.805801  213529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
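The generated kubeadm config shown above (2205 bytes) is staged as /var/tmp/minikube/kubeadm.yaml.new; later in the restart path it is only diffed against the existing /var/tmp/minikube/kubeadm.yaml. To validate it by hand, recent kubeadm releases accept the following (treat support for this subcommand in this particular build as an assumption):

  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new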
	I1009 19:44:55.818503  213529 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:44:55.822410  213529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:44:55.832657  213529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:44:55.914951  213529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:44:55.941861  213529 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615 for IP: 192.168.49.2
	I1009 19:44:55.941890  213529 certs.go:195] generating shared ca certs ...
	I1009 19:44:55.941926  213529 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:44:55.942116  213529 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:44:55.942169  213529 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:44:55.942183  213529 certs.go:257] generating profile certs ...
	I1009 19:44:55.942287  213529 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key
	I1009 19:44:55.942359  213529 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.ff60cacd
	I1009 19:44:55.942424  213529 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key
	I1009 19:44:55.942440  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:44:55.942457  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:44:55.942474  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:44:55.942488  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:44:55.942501  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:44:55.942518  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:44:55.942537  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:44:55.942552  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:44:55.942619  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:44:55.942659  213529 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:44:55.942668  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:44:55.942696  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:44:55.942725  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:44:55.942757  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:44:55.942808  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:44:55.942845  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:44:55.942867  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:44:55.942884  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:44:55.943621  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:44:55.964066  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:44:55.983870  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:44:56.003424  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:44:56.027059  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1009 19:44:56.045446  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:44:56.062784  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:44:56.080346  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:44:56.098356  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:44:56.115529  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:44:56.133046  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:44:56.151123  213529 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:44:56.164371  213529 ssh_runner.go:195] Run: openssl version
	I1009 19:44:56.171082  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:44:56.180682  213529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:44:56.184714  213529 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:44:56.184782  213529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:44:56.219575  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:44:56.228330  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:44:56.237302  213529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:44:56.241163  213529 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:44:56.241221  213529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:44:56.275220  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:44:56.283849  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:44:56.292853  213529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:44:56.296942  213529 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:44:56.297002  213529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:44:56.331446  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
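The openssl x509 -hash calls above compute the subject-name hashes behind the /etc/ssl/certs/<hash>.0 lookup links: 3ec20f2e for 1415192.pem, b5213941 for minikubeCA.pem, and 51391683 for 141519.pem. Reproducing one by hand (same paths as the log):

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
  ls -l /etc/ssl/certs/b5213941.0                                            # symlink that ultimately resolves to that PEM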
	I1009 19:44:56.340819  213529 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:44:56.344986  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:44:56.380467  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:44:56.415493  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:44:56.456227  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:44:56.501884  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:44:56.538941  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
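The six openssl -checkend 86400 probes above simply ask whether each control-plane certificate is still valid 86400 seconds (24 hours) from now; only the exit status matters, for example:

  sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
    && echo "valid for at least 24h" || echo "expires within 24h"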
	I1009 19:44:56.573879  213529 kubeadm.go:400] StartCluster: {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPa
th: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:44:56.573988  213529 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:44:56.574038  213529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:44:56.602728  213529 cri.go:89] found id: ""
	I1009 19:44:56.602785  213529 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:44:56.610971  213529 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:44:56.610988  213529 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:44:56.611028  213529 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:44:56.618277  213529 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:44:56.618823  213529 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:44:56.618971  213529 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-137890/kubeconfig needs updating (will repair): [kubeconfig missing "ha-898615" cluster setting kubeconfig missing "ha-898615" context setting]
	I1009 19:44:56.619299  213529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:44:56.619977  213529 kapi.go:59] client config for ha-898615: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:44:56.620511  213529 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:44:56.620536  213529 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:44:56.620544  213529 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:44:56.620550  213529 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:44:56.620560  213529 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:44:56.620569  213529 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:44:56.621020  213529 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:44:56.628535  213529 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:44:56.628568  213529 kubeadm.go:601] duration metric: took 17.574485ms to restartPrimaryControlPlane
	I1009 19:44:56.628593  213529 kubeadm.go:402] duration metric: took 54.723918ms to StartCluster
	I1009 19:44:56.628613  213529 settings.go:142] acquiring lock: {Name:mk34466033fb866bc8e167d2d953624ad0802283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:44:56.628681  213529 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:44:56.629423  213529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:44:56.629662  213529 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:44:56.629817  213529 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:56.629772  213529 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:44:56.629859  213529 addons.go:69] Setting storage-provisioner=true in profile "ha-898615"
	I1009 19:44:56.629886  213529 addons.go:238] Setting addon storage-provisioner=true in "ha-898615"
	I1009 19:44:56.629917  213529 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:44:56.629863  213529 addons.go:69] Setting default-storageclass=true in profile "ha-898615"
	I1009 19:44:56.630024  213529 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-898615"
	I1009 19:44:56.630251  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:56.630340  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:56.639537  213529 out.go:179] * Verifying Kubernetes components...
	I1009 19:44:56.640997  213529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:44:56.650141  213529 kapi.go:59] client config for ha-898615: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:44:56.650441  213529 addons.go:238] Setting addon default-storageclass=true in "ha-898615"
	I1009 19:44:56.650481  213529 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:44:56.651083  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:56.651372  213529 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:44:56.652834  213529 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:44:56.652857  213529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:44:56.652904  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:56.672424  213529 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:44:56.672449  213529 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:44:56.672517  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:56.673443  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:56.697607  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:56.750572  213529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:44:56.764278  213529 node_ready.go:35] waiting up to 6m0s for node "ha-898615" to be "Ready" ...
	I1009 19:44:56.790989  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:44:56.807492  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:56.848417  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:56.848472  213529 retry.go:31] will retry after 181.300226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:44:56.863090  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:56.863123  213529 retry.go:31] will retry after 174.582695ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
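All of the apply failures above are the same symptom: kubectl inside the guest cannot reach the API server on localhost:8443 yet (presumably still coming back after the CRI-O restart earlier in the log), so the addon apply step backs off and retries. A manual probe at this point would look something like this (a sketch, using tools already present in the image):

  curl -sk https://localhost:8443/healthz ; echo     # connection refused until kube-apiserver is listening
  sudo crictl ps -a --name kube-apiserver            # is the apiserver container up at all?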
	I1009 19:44:57.030457  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:44:57.038253  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:57.099728  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.099769  213529 retry.go:31] will retry after 488.394922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:44:57.103491  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.103516  213529 retry.go:31] will retry after 360.880737ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.464716  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:57.519993  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.520038  213529 retry.go:31] will retry after 545.599641ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.589293  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:44:57.644623  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.644657  213529 retry.go:31] will retry after 328.462818ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.973799  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:44:58.029584  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.029617  213529 retry.go:31] will retry after 567.831757ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.065802  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:58.119966  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.120001  213529 retry.go:31] will retry after 1.041516604s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.598304  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:44:58.652889  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.652928  213529 retry.go:31] will retry after 716.276622ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:44:58.765698  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:44:59.162239  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:59.218055  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:59.218096  213529 retry.go:31] will retry after 1.23966397s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:59.370025  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:44:59.425158  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:59.425205  213529 retry.go:31] will retry after 1.359321817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:00.458325  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:00.515848  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:00.515882  213529 retry.go:31] will retry after 2.661338285s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:00.765913  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:00.785102  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:00.843571  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:00.843612  213529 retry.go:31] will retry after 2.328073619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:03.172702  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:45:03.177348  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:03.230554  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:03.230592  213529 retry.go:31] will retry after 6.157061735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:03.231964  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:03.231992  213529 retry.go:31] will retry after 2.442330177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:03.265673  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:05.674886  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:05.729807  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:05.729849  213529 retry.go:31] will retry after 3.612542584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:05.765524  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:08.265205  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:09.342654  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:45:09.388406  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:09.399682  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:09.399719  213529 retry.go:31] will retry after 6.61412336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:09.445445  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:09.445496  213529 retry.go:31] will retry after 9.139498483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:10.265494  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:12.765436  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:15.265528  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:16.014029  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:16.069677  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:16.069742  213529 retry.go:31] will retry after 11.238798751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:17.265736  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:18.585243  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:18.639573  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:18.639614  213529 retry.go:31] will retry after 11.58446266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:19.765693  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:22.265326  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:24.765252  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:27.265337  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:27.309539  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:27.366695  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:27.366733  213529 retry.go:31] will retry after 11.52939287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:29.765203  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:30.224984  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:30.281273  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:30.281310  213529 retry.go:31] will retry after 18.613032536s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:31.765443  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:34.264978  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:36.265283  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:38.265904  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:38.897369  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:38.954353  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:38.954404  213529 retry.go:31] will retry after 17.265980832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:40.764949  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:42.765551  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:45.265513  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:47.765300  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:48.895015  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:48.951679  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:48.951716  213529 retry.go:31] will retry after 21.892988656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:49.765899  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:52.265621  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:54.765488  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:56.220544  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:56.276573  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:56.276606  213529 retry.go:31] will retry after 23.018555863s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:56.765898  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:59.265629  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:01.765354  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:04.265243  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:06.265681  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:08.765326  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:46:10.845467  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:46:10.902210  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:46:10.902367  213529 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1009 19:46:11.265125  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:13.265908  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:15.765756  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:18.265480  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:46:19.296047  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:46:19.352095  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:46:19.352218  213529 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:46:19.354638  213529 out.go:179] * Enabled addons: 
	I1009 19:46:19.355945  213529 addons.go:514] duration metric: took 1m22.726170913s for enable addons: enabled=[]
	W1009 19:46:20.265755  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:22.765780  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:25.265698  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:27.765417  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:30.265282  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:32.765120  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:34.765843  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:37.265533  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:39.765374  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:42.265123  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:44.265770  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:46.765835  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:49.265312  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:51.765030  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:53.765470  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:56.265306  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:58.765186  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:01.265058  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:03.265774  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:05.765635  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:08.265576  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:10.765798  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:13.265761  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:15.765119  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:18.265077  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:20.265918  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:22.765962  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:25.264961  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:27.265736  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:29.765764  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:32.265610  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:34.765657  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:37.265747  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:39.765491  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:42.265404  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:44.765558  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:47.265514  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:49.765290  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:52.265293  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:54.765328  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:56.765484  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:59.265305  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:01.765149  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:03.765915  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:06.265924  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:08.765952  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:11.265908  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:13.765815  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:16.264979  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:18.765137  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:20.765889  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:23.265669  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:25.765672  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:28.265325  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:30.765046  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:32.765624  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:35.265556  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:37.265628  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:39.765520  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:42.265310  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:44.765281  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:47.265094  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:49.265648  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:51.765426  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:53.765652  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:56.264999  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:58.265225  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:00.265508  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:02.265792  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:04.765259  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:06.765636  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:09.265082  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:11.265335  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:13.765305  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:15.765755  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:18.264951  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:20.265332  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:22.265635  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:24.265896  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:26.765549  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:29.265008  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:31.265176  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:33.765225  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:35.765576  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:38.265134  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:40.265493  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:42.765451  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:44.765511  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:47.265203  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:49.765355  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:52.265333  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:54.765296  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:56.765453  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:58.765650  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:01.265098  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:03.265263  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:05.265665  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:07.765412  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:09.765500  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:11.765824  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:14.264992  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:16.265244  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:18.265292  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:20.265689  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:22.765039  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:24.765172  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:26.765326  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:29.265183  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:31.764935  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:34.265071  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:36.265451  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:38.265823  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:40.765096  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:43.264909  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:45.265266  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:47.265687  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:49.765194  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:51.765247  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:54.265134  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:56.265319  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:50:56.765195  213529 node_ready.go:38] duration metric: took 6m0.000867219s for node "ha-898615" to be "Ready" ...
	I1009 19:50:56.767874  213529 out.go:203] 
	W1009 19:50:56.769214  213529 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:50:56.769244  213529 out.go:285] * 
	W1009 19:50:56.771156  213529 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:50:56.772598  213529 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:50:51 ha-898615 crio[517]: time="2025-10-09T19:50:51.057365578Z" level=info msg="createCtr: removing container 7bbe8e9f95fa7c2a24ca731e70d1c2050e24d9896658b905bc75af699c95e2b7" id=4cf88d7e-0c00-4b22-a05f-e14968eda2fa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:51 ha-898615 crio[517]: time="2025-10-09T19:50:51.057415207Z" level=info msg="createCtr: deleting container 7bbe8e9f95fa7c2a24ca731e70d1c2050e24d9896658b905bc75af699c95e2b7 from storage" id=4cf88d7e-0c00-4b22-a05f-e14968eda2fa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:51 ha-898615 crio[517]: time="2025-10-09T19:50:51.059666218Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-898615_kube-system_2d43b86ac03e93e11f35c923e2560103_0" id=4cf88d7e-0c00-4b22-a05f-e14968eda2fa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.036434515Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=b5f99731-3bfb-4b7e-9b6e-b74fc0f4378e name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.037297245Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=a9c219fc-2038-444a-8d2a-c8ee3d0720cc name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.038142766Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-898615/kube-scheduler" id=d7600065-5a91-46f4-affe-f31461470a38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.038360896Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.04203751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.042483314Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.060338662Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d7600065-5a91-46f4-affe-f31461470a38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.06181782Z" level=info msg="createCtr: deleting container ID 9d14890568eccae5c7353af71ccef59cb0fe1dda70f89b6799dd6a12b92042dd from idIndex" id=d7600065-5a91-46f4-affe-f31461470a38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.061852385Z" level=info msg="createCtr: removing container 9d14890568eccae5c7353af71ccef59cb0fe1dda70f89b6799dd6a12b92042dd" id=d7600065-5a91-46f4-affe-f31461470a38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.06188441Z" level=info msg="createCtr: deleting container 9d14890568eccae5c7353af71ccef59cb0fe1dda70f89b6799dd6a12b92042dd from storage" id=d7600065-5a91-46f4-affe-f31461470a38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.064105448Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-898615_kube-system_bc0e16a1814c5389485436acdfc968ed_0" id=d7600065-5a91-46f4-affe-f31461470a38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.0355101Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=0d089190-0255-4132-8944-47e3a171eec9 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.036424184Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=a6914955-8723-4589-afa9-5090877fc579 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.037428452Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-898615/kube-controller-manager" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.037644021Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.041015164Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.041462284Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.057834228Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.059302014Z" level=info msg="createCtr: deleting container ID ce64a8d012383f65cc84c71a301fcffb36f3b33f4d99f9979ab7ca8250098045 from idIndex" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.05934333Z" level=info msg="createCtr: removing container ce64a8d012383f65cc84c71a301fcffb36f3b33f4d99f9979ab7ca8250098045" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.059392517Z" level=info msg="createCtr: deleting container ce64a8d012383f65cc84c71a301fcffb36f3b33f4d99f9979ab7ca8250098045 from storage" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.061445962Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-898615_kube-system_606a9e8949f01295d66148e5eac379ce_0" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:50:59.417933    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:50:59.418561    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:50:59.420224    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:50:59.420811    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:50:59.422402    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:50:59 up  1:33,  0 user,  load average: 0.24, 0.07, 0.74
	Linux ha-898615 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:50:51 ha-898615 kubelet[666]: E1009 19:50:51.681089     666 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:50:51 ha-898615 kubelet[666]: I1009 19:50:51.858858     666 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:50:51 ha-898615 kubelet[666]: E1009 19:50:51.859301     666 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	Oct 09 19:50:53 ha-898615 kubelet[666]: E1009 19:50:53.035941     666 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:50:53 ha-898615 kubelet[666]: E1009 19:50:53.064430     666 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:50:53 ha-898615 kubelet[666]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:50:53 ha-898615 kubelet[666]:  > podSandboxID="829862355c0892a10f586a11617b0eee63c8b9aa21bbf84935814681a67803f6"
	Oct 09 19:50:53 ha-898615 kubelet[666]: E1009 19:50:53.064534     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:50:53 ha-898615 kubelet[666]:         container kube-scheduler start failed in pod kube-scheduler-ha-898615_kube-system(bc0e16a1814c5389485436acdfc968ed): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:50:53 ha-898615 kubelet[666]:  > logger="UnhandledError"
	Oct 09 19:50:53 ha-898615 kubelet[666]: E1009 19:50:53.064571     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-898615" podUID="bc0e16a1814c5389485436acdfc968ed"
	Oct 09 19:50:55 ha-898615 kubelet[666]: E1009 19:50:55.842360     666 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 09 19:50:56 ha-898615 kubelet[666]: E1009 19:50:56.035079     666 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:50:56 ha-898615 kubelet[666]: E1009 19:50:56.051793     666 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-898615\" not found"
	Oct 09 19:50:56 ha-898615 kubelet[666]: E1009 19:50:56.061763     666 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:50:56 ha-898615 kubelet[666]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:50:56 ha-898615 kubelet[666]:  > podSandboxID="c69416833813892406432d22789fcb941cf442d503fc8a7a72d459c819b42203"
	Oct 09 19:50:56 ha-898615 kubelet[666]: E1009 19:50:56.061885     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:50:56 ha-898615 kubelet[666]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-898615_kube-system(606a9e8949f01295d66148e5eac379ce): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:50:56 ha-898615 kubelet[666]:  > logger="UnhandledError"
	Oct 09 19:50:56 ha-898615 kubelet[666]: E1009 19:50:56.061937     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-898615" podUID="606a9e8949f01295d66148e5eac379ce"
	Oct 09 19:50:58 ha-898615 kubelet[666]: E1009 19:50:58.083759     666 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-898615.186cea3b954fcad9  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-898615,UID:ha-898615,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-898615 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-898615,},FirstTimestamp:2025-10-09 19:44:56.024025817 +0000 UTC m=+0.079267356,LastTimestamp:2025-10-09 19:44:56.024025817 +0000 UTC m=+0.079267356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-898615,}"
	Oct 09 19:50:58 ha-898615 kubelet[666]: E1009 19:50:58.682082     666 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:50:58 ha-898615 kubelet[666]: I1009 19:50:58.861025     666 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:50:58 ha-898615 kubelet[666]: E1009 19:50:58.861511     666 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615: exit status 2 (310.643929ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-898615" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.67s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (1.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-898615 node add --control-plane --alsologtostderr -v 5: exit status 103 (263.347659ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-898615 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-898615"

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:50:59.874581  218209 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:50:59.874842  218209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:50:59.874851  218209 out.go:374] Setting ErrFile to fd 2...
	I1009 19:50:59.874856  218209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:50:59.875055  218209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:50:59.875350  218209 mustload.go:65] Loading cluster: ha-898615
	I1009 19:50:59.875727  218209 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:50:59.876105  218209 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:50:59.894371  218209 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:50:59.894666  218209 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:50:59.957177  218209 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:50:59.946389918 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:50:59.957323  218209 api_server.go:166] Checking apiserver status ...
	I1009 19:50:59.957397  218209 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:50:59.957450  218209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:50:59.975872  218209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	W1009 19:51:00.082913  218209 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:51:00.084911  218209 out.go:179] * The control-plane node ha-898615 apiserver is not running: (state=Stopped)
	I1009 19:51:00.086520  218209 out.go:179]   To start a cluster, run: "minikube start -p ha-898615"

                                                
                                                
** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-898615 node add --control-plane --alsologtostderr -v 5" : exit status 103
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-898615
helpers_test.go:243: (dbg) docker inspect ha-898615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	        "Created": "2025-10-09T19:27:46.185451139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213747,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:44:49.941874734Z",
	            "FinishedAt": "2025-10-09T19:44:48.613925122Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hostname",
	        "HostsPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hosts",
	        "LogPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e-json.log",
	        "Name": "/ha-898615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-898615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-898615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	                "LowerDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-898615",
	                "Source": "/var/lib/docker/volumes/ha-898615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-898615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-898615",
	                "name.minikube.sigs.k8s.io": "ha-898615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e470e353e06ffdcc8ac77f77b52e04dc5a3b643fb3168ea2b3827d52af8a235b",
	            "SandboxKey": "/var/run/docker/netns/e470e353e06f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-898615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:8e:9e:52:56:fb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a28b71fbb3464ef90fae817eea2f214ee0a3d1fe379af7ad6c6cc41b0261919e",
	                    "EndpointID": "d184af654fb96cd2156924061667ddadda3f85161b00b7d762c0f3c72fcbe2ef",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-898615",
	                        "24eb0e5b6c52"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615: exit status 2 (310.306031ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node add --alsologtostderr -v 5                                                    │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node stop m02 --alsologtostderr -v 5                                               │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node start m02 --alsologtostderr -v 5                                              │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node list --alsologtostderr -v 5                                                   │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ stop    │ ha-898615 stop --alsologtostderr -v 5                                                        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ ha-898615 start --wait true --alsologtostderr -v 5                                           │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ node    │ ha-898615 node list --alsologtostderr -v 5                                                   │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:44 UTC │                     │
	│ node    │ ha-898615 node delete m03 --alsologtostderr -v 5                                             │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:44 UTC │                     │
	│ stop    │ ha-898615 stop --alsologtostderr -v 5                                                        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:44 UTC │ 09 Oct 25 19:44 UTC │
	│ start   │ ha-898615 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:44 UTC │                     │
	│ node    │ ha-898615 node add --control-plane --alsologtostderr -v 5                                    │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:50 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:44:49
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:44:49.701374  213529 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:44:49.701684  213529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:44:49.701694  213529 out.go:374] Setting ErrFile to fd 2...
	I1009 19:44:49.701699  213529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:44:49.701891  213529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:44:49.702347  213529 out.go:368] Setting JSON to false
	I1009 19:44:49.703363  213529 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5239,"bootTime":1760033851,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:44:49.703499  213529 start.go:143] virtualization: kvm guest
	I1009 19:44:49.705480  213529 out.go:179] * [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:44:49.706677  213529 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:44:49.706680  213529 notify.go:221] Checking for updates...
	I1009 19:44:49.709030  213529 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:44:49.710400  213529 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:44:49.711704  213529 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:44:49.712804  213529 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:44:49.713905  213529 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:44:49.715428  213529 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:49.715879  213529 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:44:49.737923  213529 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:44:49.738109  213529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:44:49.796426  213529 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:44:49.785317755 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:44:49.796539  213529 docker.go:319] overlay module found
	I1009 19:44:49.801541  213529 out.go:179] * Using the docker driver based on existing profile
	I1009 19:44:49.802798  213529 start.go:309] selected driver: docker
	I1009 19:44:49.802817  213529 start.go:930] validating driver "docker" against &{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:44:49.802903  213529 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:44:49.802989  213529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:44:49.866941  213529 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:44:49.857185251 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:44:49.867781  213529 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:44:49.867825  213529 cni.go:84] Creating CNI manager for ""
	I1009 19:44:49.867876  213529 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:44:49.867941  213529 start.go:353] cluster config:
	{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1009 19:44:49.869783  213529 out.go:179] * Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	I1009 19:44:49.871046  213529 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:44:49.872323  213529 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:44:49.873634  213529 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:44:49.873676  213529 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:44:49.873671  213529 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:44:49.873684  213529 cache.go:58] Caching tarball of preloaded images
	I1009 19:44:49.873769  213529 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:44:49.873780  213529 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:44:49.873868  213529 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:44:49.894117  213529 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:44:49.894140  213529 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:44:49.894160  213529 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:44:49.894193  213529 start.go:361] acquireMachinesLock for ha-898615: {Name:mk23a3ebf19c307a491c00f9452d757837bd240e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:44:49.894262  213529 start.go:365] duration metric: took 46.947µs to acquireMachinesLock for "ha-898615"
	I1009 19:44:49.894284  213529 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:44:49.894295  213529 fix.go:55] fixHost starting: 
	I1009 19:44:49.894546  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:49.912866  213529 fix.go:113] recreateIfNeeded on ha-898615: state=Stopped err=<nil>
	W1009 19:44:49.912910  213529 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:44:49.914819  213529 out.go:252] * Restarting existing docker container for "ha-898615" ...
	I1009 19:44:49.914886  213529 cli_runner.go:164] Run: docker start ha-898615
	I1009 19:44:50.154621  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:50.173856  213529 kic.go:430] container "ha-898615" state is running.
	I1009 19:44:50.174272  213529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:44:50.192860  213529 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:44:50.193122  213529 machine.go:93] provisionDockerMachine start ...
	I1009 19:44:50.193203  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:50.211807  213529 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:50.212085  213529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:44:50.212111  213529 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:44:50.212792  213529 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45050->127.0.0.1:32793: read: connection reset by peer
	I1009 19:44:53.362882  213529 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:44:53.362920  213529 ubuntu.go:182] provisioning hostname "ha-898615"
	I1009 19:44:53.363008  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:53.383229  213529 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:53.383482  213529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:44:53.383500  213529 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-898615 && echo "ha-898615" | sudo tee /etc/hostname
	I1009 19:44:53.540739  213529 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:44:53.540832  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:53.559203  213529 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:53.559489  213529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:44:53.559515  213529 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-898615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-898615/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-898615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:44:53.707903  213529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:44:53.707951  213529 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:44:53.707980  213529 ubuntu.go:190] setting up certificates
	I1009 19:44:53.707995  213529 provision.go:84] configureAuth start
	I1009 19:44:53.708056  213529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:44:53.726880  213529 provision.go:143] copyHostCerts
	I1009 19:44:53.726919  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:44:53.726954  213529 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:44:53.726969  213529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:44:53.727040  213529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:44:53.727121  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:44:53.727138  213529 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:44:53.727144  213529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:44:53.727170  213529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:44:53.727216  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:44:53.727232  213529 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:44:53.727242  213529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:44:53.727264  213529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:44:53.727314  213529 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.ha-898615 san=[127.0.0.1 192.168.49.2 ha-898615 localhost minikube]
	I1009 19:44:53.827371  213529 provision.go:177] copyRemoteCerts
	I1009 19:44:53.827447  213529 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:44:53.827485  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:53.846303  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:53.951136  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:44:53.951199  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:44:53.969281  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:44:53.969347  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:44:53.987249  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:44:53.987314  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:44:54.005891  213529 provision.go:87] duration metric: took 297.874582ms to configureAuth
	I1009 19:44:54.005921  213529 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:44:54.006109  213529 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:54.006224  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.024397  213529 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:54.024626  213529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:44:54.024642  213529 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:44:54.289546  213529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:44:54.289573  213529 machine.go:96] duration metric: took 4.096433967s to provisionDockerMachine
	I1009 19:44:54.289589  213529 start.go:294] postStartSetup for "ha-898615" (driver="docker")
	I1009 19:44:54.289601  213529 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:44:54.289664  213529 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:44:54.289714  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.308340  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:54.413217  213529 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:44:54.417126  213529 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:44:54.417190  213529 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:44:54.417225  213529 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:44:54.417286  213529 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:44:54.417372  213529 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:44:54.417406  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:44:54.417501  213529 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:44:54.425333  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:44:54.443854  213529 start.go:297] duration metric: took 154.246925ms for postStartSetup
	I1009 19:44:54.443940  213529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:44:54.443976  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.461915  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:54.563125  213529 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:44:54.568111  213529 fix.go:57] duration metric: took 4.673810177s for fixHost
	I1009 19:44:54.568142  213529 start.go:84] releasing machines lock for "ha-898615", held for 4.673868514s
	I1009 19:44:54.568206  213529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:44:54.586879  213529 ssh_runner.go:195] Run: cat /version.json
	I1009 19:44:54.586918  213529 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:44:54.586944  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.586979  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.606718  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:54.607259  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:54.762981  213529 ssh_runner.go:195] Run: systemctl --version
	I1009 19:44:54.769817  213529 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:44:54.808737  213529 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:44:54.813835  213529 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:44:54.813899  213529 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:44:54.822567  213529 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:44:54.822602  213529 start.go:496] detecting cgroup driver to use...
	I1009 19:44:54.822639  213529 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:44:54.822691  213529 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:44:54.837558  213529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:44:54.850649  213529 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:44:54.850721  213529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:44:54.865664  213529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:44:54.878415  213529 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:44:54.957542  213529 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:44:55.038948  213529 docker.go:234] disabling docker service ...
	I1009 19:44:55.039033  213529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:44:55.054311  213529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:44:55.066894  213529 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:44:55.146756  213529 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:44:55.226886  213529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:44:55.239751  213529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:44:55.254322  213529 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:44:55.254392  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.263683  213529 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:44:55.263764  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.272570  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.281877  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.291212  213529 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:44:55.299205  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.308053  213529 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.316488  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.325623  213529 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:44:55.333246  213529 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:44:55.340957  213529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:44:55.420337  213529 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:44:55.530206  213529 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:44:55.530277  213529 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:44:55.534555  213529 start.go:564] Will wait 60s for crictl version
	I1009 19:44:55.534616  213529 ssh_runner.go:195] Run: which crictl
	I1009 19:44:55.538439  213529 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:44:55.564260  213529 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:44:55.564337  213529 ssh_runner.go:195] Run: crio --version
	I1009 19:44:55.593049  213529 ssh_runner.go:195] Run: crio --version
	I1009 19:44:55.622200  213529 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:44:55.623540  213529 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:44:55.641466  213529 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:44:55.646233  213529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:44:55.657668  213529 kubeadm.go:883] updating cluster {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClien
tPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:44:55.657780  213529 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:44:55.657822  213529 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:44:55.689903  213529 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:44:55.689929  213529 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:44:55.689989  213529 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:44:55.716841  213529 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:44:55.716874  213529 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:44:55.716885  213529 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:44:55.717021  213529 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-898615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:44:55.717104  213529 ssh_runner.go:195] Run: crio config
	I1009 19:44:55.762724  213529 cni.go:84] Creating CNI manager for ""
	I1009 19:44:55.762743  213529 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:44:55.762760  213529 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:44:55.762781  213529 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-898615 NodeName:ha-898615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:44:55.762917  213529 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-898615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:44:55.762981  213529 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:44:55.771348  213529 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:44:55.771430  213529 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:44:55.779128  213529 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:44:55.792326  213529 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:44:55.805801  213529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 19:44:55.818503  213529 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:44:55.822410  213529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:44:55.832657  213529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:44:55.914951  213529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:44:55.941861  213529 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615 for IP: 192.168.49.2
	I1009 19:44:55.941890  213529 certs.go:195] generating shared ca certs ...
	I1009 19:44:55.941926  213529 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:44:55.942116  213529 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:44:55.942169  213529 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:44:55.942183  213529 certs.go:257] generating profile certs ...
	I1009 19:44:55.942287  213529 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key
	I1009 19:44:55.942359  213529 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.ff60cacd
	I1009 19:44:55.942424  213529 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key
	I1009 19:44:55.942440  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:44:55.942457  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:44:55.942474  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:44:55.942488  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:44:55.942501  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:44:55.942518  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:44:55.942537  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:44:55.942552  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:44:55.942619  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:44:55.942659  213529 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:44:55.942668  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:44:55.942696  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:44:55.942725  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:44:55.942757  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:44:55.942808  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:44:55.942845  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:44:55.942867  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:44:55.942884  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:44:55.943621  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:44:55.964066  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:44:55.983870  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:44:56.003424  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:44:56.027059  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1009 19:44:56.045446  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:44:56.062784  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:44:56.080346  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:44:56.098356  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:44:56.115529  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:44:56.133046  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:44:56.151123  213529 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:44:56.164371  213529 ssh_runner.go:195] Run: openssl version
	I1009 19:44:56.171082  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:44:56.180682  213529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:44:56.184714  213529 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:44:56.184782  213529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:44:56.219575  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:44:56.228330  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:44:56.237302  213529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:44:56.241163  213529 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:44:56.241221  213529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:44:56.275220  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:44:56.283849  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:44:56.292853  213529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:44:56.296942  213529 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:44:56.297002  213529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:44:56.331446  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:44:56.340819  213529 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:44:56.344986  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:44:56.380467  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:44:56.415493  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:44:56.456227  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:44:56.501884  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:44:56.538941  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 19:44:56.573879  213529 kubeadm.go:400] StartCluster: {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPa
th: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:44:56.573988  213529 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:44:56.574038  213529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:44:56.602728  213529 cri.go:89] found id: ""
	I1009 19:44:56.602785  213529 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:44:56.610971  213529 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:44:56.610988  213529 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:44:56.611028  213529 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:44:56.618277  213529 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:44:56.618823  213529 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:44:56.618971  213529 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-137890/kubeconfig needs updating (will repair): [kubeconfig missing "ha-898615" cluster setting kubeconfig missing "ha-898615" context setting]
	I1009 19:44:56.619299  213529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:44:56.619977  213529 kapi.go:59] client config for ha-898615: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:44:56.620511  213529 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:44:56.620536  213529 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:44:56.620544  213529 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:44:56.620550  213529 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:44:56.620560  213529 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:44:56.620569  213529 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:44:56.621020  213529 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:44:56.628535  213529 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:44:56.628568  213529 kubeadm.go:601] duration metric: took 17.574485ms to restartPrimaryControlPlane
	I1009 19:44:56.628593  213529 kubeadm.go:402] duration metric: took 54.723918ms to StartCluster
	I1009 19:44:56.628613  213529 settings.go:142] acquiring lock: {Name:mk34466033fb866bc8e167d2d953624ad0802283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:44:56.628681  213529 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:44:56.629423  213529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:44:56.629662  213529 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:44:56.629817  213529 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:56.629772  213529 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:44:56.629859  213529 addons.go:69] Setting storage-provisioner=true in profile "ha-898615"
	I1009 19:44:56.629886  213529 addons.go:238] Setting addon storage-provisioner=true in "ha-898615"
	I1009 19:44:56.629917  213529 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:44:56.629863  213529 addons.go:69] Setting default-storageclass=true in profile "ha-898615"
	I1009 19:44:56.630024  213529 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-898615"
	I1009 19:44:56.630251  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:56.630340  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:56.639537  213529 out.go:179] * Verifying Kubernetes components...
	I1009 19:44:56.640997  213529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:44:56.650141  213529 kapi.go:59] client config for ha-898615: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:44:56.650441  213529 addons.go:238] Setting addon default-storageclass=true in "ha-898615"
	I1009 19:44:56.650481  213529 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:44:56.651083  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:56.651372  213529 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:44:56.652834  213529 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:44:56.652857  213529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:44:56.652904  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:56.672424  213529 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:44:56.672449  213529 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:44:56.672517  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:56.673443  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:56.697607  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:56.750572  213529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:44:56.764278  213529 node_ready.go:35] waiting up to 6m0s for node "ha-898615" to be "Ready" ...
	I1009 19:44:56.790989  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:44:56.807492  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:56.848417  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:56.848472  213529 retry.go:31] will retry after 181.300226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:44:56.863090  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:56.863123  213529 retry.go:31] will retry after 174.582695ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.030457  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:44:57.038253  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:57.099728  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.099769  213529 retry.go:31] will retry after 488.394922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:44:57.103491  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.103516  213529 retry.go:31] will retry after 360.880737ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.464716  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:57.519993  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.520038  213529 retry.go:31] will retry after 545.599641ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.589293  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:44:57.644623  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.644657  213529 retry.go:31] will retry after 328.462818ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.973799  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:44:58.029584  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.029617  213529 retry.go:31] will retry after 567.831757ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.065802  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:58.119966  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.120001  213529 retry.go:31] will retry after 1.041516604s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.598304  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:44:58.652889  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.652928  213529 retry.go:31] will retry after 716.276622ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:44:58.765698  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:44:59.162239  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:59.218055  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:59.218096  213529 retry.go:31] will retry after 1.23966397s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:59.370025  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:44:59.425158  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:59.425205  213529 retry.go:31] will retry after 1.359321817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:00.458325  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:00.515848  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:00.515882  213529 retry.go:31] will retry after 2.661338285s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:00.765913  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:00.785102  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:00.843571  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:00.843612  213529 retry.go:31] will retry after 2.328073619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:03.172702  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:45:03.177348  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:03.230554  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:03.230592  213529 retry.go:31] will retry after 6.157061735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:03.231964  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:03.231992  213529 retry.go:31] will retry after 2.442330177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:03.265673  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:05.674886  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:05.729807  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:05.729849  213529 retry.go:31] will retry after 3.612542584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:05.765524  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:08.265205  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:09.342654  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:45:09.388406  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:09.399682  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:09.399719  213529 retry.go:31] will retry after 6.61412336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:09.445445  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:09.445496  213529 retry.go:31] will retry after 9.139498483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:10.265494  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:12.765436  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:15.265528  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:16.014029  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:16.069677  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:16.069742  213529 retry.go:31] will retry after 11.238798751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:17.265736  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:18.585243  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:18.639573  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:18.639614  213529 retry.go:31] will retry after 11.58446266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:19.765693  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:22.265326  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:24.765252  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:27.265337  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:27.309539  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:27.366695  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:27.366733  213529 retry.go:31] will retry after 11.52939287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:29.765203  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:30.224984  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:30.281273  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:30.281310  213529 retry.go:31] will retry after 18.613032536s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:31.765443  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:34.264978  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:36.265283  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:38.265904  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:38.897369  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:38.954353  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:38.954404  213529 retry.go:31] will retry after 17.265980832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:40.764949  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:42.765551  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:45.265513  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:47.765300  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:48.895015  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:48.951679  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:48.951716  213529 retry.go:31] will retry after 21.892988656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:49.765899  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:52.265621  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:54.765488  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:56.220544  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:56.276573  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:56.276606  213529 retry.go:31] will retry after 23.018555863s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:56.765898  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:59.265629  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:01.765354  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:04.265243  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:06.265681  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:08.765326  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:46:10.845467  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:46:10.902210  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:46:10.902367  213529 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1009 19:46:11.265125  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:13.265908  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:15.765756  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:18.265480  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:46:19.296047  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:46:19.352095  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:46:19.352218  213529 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:46:19.354638  213529 out.go:179] * Enabled addons: 
	I1009 19:46:19.355945  213529 addons.go:514] duration metric: took 1m22.726170913s for enable addons: enabled=[]
	W1009 19:46:20.265755  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:22.765780  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:25.265698  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:27.765417  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:30.265282  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:32.765120  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:34.765843  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:37.265533  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:39.765374  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:42.265123  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:44.265770  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:46.765835  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:49.265312  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:51.765030  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:53.765470  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:56.265306  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:58.765186  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:01.265058  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:03.265774  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:05.765635  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:08.265576  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:10.765798  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:13.265761  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:15.765119  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:18.265077  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:20.265918  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:22.765962  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:25.264961  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:27.265736  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:29.765764  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:32.265610  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:34.765657  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:37.265747  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:39.765491  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:42.265404  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:44.765558  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:47.265514  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:49.765290  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:52.265293  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:54.765328  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:56.765484  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:59.265305  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:01.765149  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:03.765915  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:06.265924  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:08.765952  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:11.265908  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:13.765815  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:16.264979  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:18.765137  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:20.765889  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:23.265669  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:25.765672  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:28.265325  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:30.765046  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:32.765624  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:35.265556  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:37.265628  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:39.765520  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:42.265310  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:44.765281  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:47.265094  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:49.265648  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:51.765426  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:53.765652  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:56.264999  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:58.265225  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:00.265508  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:02.265792  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:04.765259  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:06.765636  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:09.265082  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:11.265335  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:13.765305  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:15.765755  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:18.264951  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:20.265332  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:22.265635  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:24.265896  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:26.765549  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:29.265008  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:31.265176  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:33.765225  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:35.765576  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:38.265134  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:40.265493  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:42.765451  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:44.765511  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:47.265203  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:49.765355  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:52.265333  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:54.765296  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:56.765453  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:58.765650  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:01.265098  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:03.265263  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:05.265665  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:07.765412  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:09.765500  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:11.765824  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:14.264992  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:16.265244  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:18.265292  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:20.265689  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:22.765039  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:24.765172  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:26.765326  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:29.265183  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:31.764935  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:34.265071  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:36.265451  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:38.265823  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:40.765096  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:43.264909  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:45.265266  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:47.265687  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:49.765194  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:51.765247  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:54.265134  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:56.265319  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:50:56.765195  213529 node_ready.go:38] duration metric: took 6m0.000867219s for node "ha-898615" to be "Ready" ...
	I1009 19:50:56.767874  213529 out.go:203] 
	W1009 19:50:56.769214  213529 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:50:56.769244  213529 out.go:285] * 
	W1009 19:50:56.771156  213529 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:50:56.772598  213529 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:50:51 ha-898615 crio[517]: time="2025-10-09T19:50:51.057365578Z" level=info msg="createCtr: removing container 7bbe8e9f95fa7c2a24ca731e70d1c2050e24d9896658b905bc75af699c95e2b7" id=4cf88d7e-0c00-4b22-a05f-e14968eda2fa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:51 ha-898615 crio[517]: time="2025-10-09T19:50:51.057415207Z" level=info msg="createCtr: deleting container 7bbe8e9f95fa7c2a24ca731e70d1c2050e24d9896658b905bc75af699c95e2b7 from storage" id=4cf88d7e-0c00-4b22-a05f-e14968eda2fa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:51 ha-898615 crio[517]: time="2025-10-09T19:50:51.059666218Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-898615_kube-system_2d43b86ac03e93e11f35c923e2560103_0" id=4cf88d7e-0c00-4b22-a05f-e14968eda2fa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.036434515Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=b5f99731-3bfb-4b7e-9b6e-b74fc0f4378e name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.037297245Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=a9c219fc-2038-444a-8d2a-c8ee3d0720cc name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.038142766Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-898615/kube-scheduler" id=d7600065-5a91-46f4-affe-f31461470a38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.038360896Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.04203751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.042483314Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.060338662Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d7600065-5a91-46f4-affe-f31461470a38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.06181782Z" level=info msg="createCtr: deleting container ID 9d14890568eccae5c7353af71ccef59cb0fe1dda70f89b6799dd6a12b92042dd from idIndex" id=d7600065-5a91-46f4-affe-f31461470a38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.061852385Z" level=info msg="createCtr: removing container 9d14890568eccae5c7353af71ccef59cb0fe1dda70f89b6799dd6a12b92042dd" id=d7600065-5a91-46f4-affe-f31461470a38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.06188441Z" level=info msg="createCtr: deleting container 9d14890568eccae5c7353af71ccef59cb0fe1dda70f89b6799dd6a12b92042dd from storage" id=d7600065-5a91-46f4-affe-f31461470a38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.064105448Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-898615_kube-system_bc0e16a1814c5389485436acdfc968ed_0" id=d7600065-5a91-46f4-affe-f31461470a38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.0355101Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=0d089190-0255-4132-8944-47e3a171eec9 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.036424184Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=a6914955-8723-4589-afa9-5090877fc579 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.037428452Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-898615/kube-controller-manager" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.037644021Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.041015164Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.041462284Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.057834228Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.059302014Z" level=info msg="createCtr: deleting container ID ce64a8d012383f65cc84c71a301fcffb36f3b33f4d99f9979ab7ca8250098045 from idIndex" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.05934333Z" level=info msg="createCtr: removing container ce64a8d012383f65cc84c71a301fcffb36f3b33f4d99f9979ab7ca8250098045" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.059392517Z" level=info msg="createCtr: deleting container ce64a8d012383f65cc84c71a301fcffb36f3b33f4d99f9979ab7ca8250098045 from storage" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.061445962Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-898615_kube-system_606a9e8949f01295d66148e5eac379ce_0" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:51:01.017407    2360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:51:01.018013    2360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:51:01.019577    2360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:51:01.020026    2360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:51:01.021608    2360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:51:01 up  1:33,  0 user,  load average: 0.24, 0.07, 0.74
	Linux ha-898615 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:50:51 ha-898615 kubelet[666]: E1009 19:50:51.681089     666 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:50:51 ha-898615 kubelet[666]: I1009 19:50:51.858858     666 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:50:51 ha-898615 kubelet[666]: E1009 19:50:51.859301     666 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	Oct 09 19:50:53 ha-898615 kubelet[666]: E1009 19:50:53.035941     666 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:50:53 ha-898615 kubelet[666]: E1009 19:50:53.064430     666 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:50:53 ha-898615 kubelet[666]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:50:53 ha-898615 kubelet[666]:  > podSandboxID="829862355c0892a10f586a11617b0eee63c8b9aa21bbf84935814681a67803f6"
	Oct 09 19:50:53 ha-898615 kubelet[666]: E1009 19:50:53.064534     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:50:53 ha-898615 kubelet[666]:         container kube-scheduler start failed in pod kube-scheduler-ha-898615_kube-system(bc0e16a1814c5389485436acdfc968ed): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:50:53 ha-898615 kubelet[666]:  > logger="UnhandledError"
	Oct 09 19:50:53 ha-898615 kubelet[666]: E1009 19:50:53.064571     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-898615" podUID="bc0e16a1814c5389485436acdfc968ed"
	Oct 09 19:50:55 ha-898615 kubelet[666]: E1009 19:50:55.842360     666 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 09 19:50:56 ha-898615 kubelet[666]: E1009 19:50:56.035079     666 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:50:56 ha-898615 kubelet[666]: E1009 19:50:56.051793     666 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-898615\" not found"
	Oct 09 19:50:56 ha-898615 kubelet[666]: E1009 19:50:56.061763     666 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:50:56 ha-898615 kubelet[666]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:50:56 ha-898615 kubelet[666]:  > podSandboxID="c69416833813892406432d22789fcb941cf442d503fc8a7a72d459c819b42203"
	Oct 09 19:50:56 ha-898615 kubelet[666]: E1009 19:50:56.061885     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:50:56 ha-898615 kubelet[666]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-898615_kube-system(606a9e8949f01295d66148e5eac379ce): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:50:56 ha-898615 kubelet[666]:  > logger="UnhandledError"
	Oct 09 19:50:56 ha-898615 kubelet[666]: E1009 19:50:56.061937     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-898615" podUID="606a9e8949f01295d66148e5eac379ce"
	Oct 09 19:50:58 ha-898615 kubelet[666]: E1009 19:50:58.083759     666 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-898615.186cea3b954fcad9  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-898615,UID:ha-898615,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-898615 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-898615,},FirstTimestamp:2025-10-09 19:44:56.024025817 +0000 UTC m=+0.079267356,LastTimestamp:2025-10-09 19:44:56.024025817 +0000 UTC m=+0.079267356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-898615,}"
	Oct 09 19:50:58 ha-898615 kubelet[666]: E1009 19:50:58.682082     666 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:50:58 ha-898615 kubelet[666]: I1009 19:50:58.861025     666 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:50:58 ha-898615 kubelet[666]: E1009 19:50:58.861511     666 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	

                                                
                                                
-- /stdout --
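Note: the long run of "connection refused" retries in the log above, and the final "Exiting due to GUEST_START ... wait 6m0s for node" message, come from a readiness poll that keeps dialing the apiserver endpoint until a deadline expires. Below is a minimal, hypothetical Go sketch of that retry shape, not minikube's actual node_ready.go code; the address, interval, and names are assumptions taken from the log:

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    // waitTCP dials addr until it connects or ctx expires, logging each failed
    // attempt the way the retry loop above does.
    func waitTCP(ctx context.Context, addr string, every time.Duration) error {
    	d := net.Dialer{Timeout: 2 * time.Second}
    	for {
    		conn, err := d.DialContext(ctx, "tcp", addr)
    		if err == nil {
    			conn.Close()
    			return nil // endpoint reachable
    		}
    		fmt.Printf("W endpoint %s not reachable (will retry): %v\n", addr, err)
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("WaitNodeCondition: %w", ctx.Err())
    		case <-time.After(every):
    		}
    	}
    }

    func main() {
    	// 6m overall deadline mirrors the "wait 6m0s for node" failure above.
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	if err := waitTCP(ctx, "192.168.49.2:8443", 2500*time.Millisecond); err != nil {
    		fmt.Println("X Exiting:", err)
    	}
    }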
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615: exit status 2 (314.124678ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-898615" apiserver is not running, skipping kubectl commands (state="Stopped")
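Note: the `--format={{.APIServer}}` and `--format={{.Host}}` arguments used by the status checks here are Go text/template expressions evaluated against minikube's status data, which is why the command prints a single word such as "Stopped" or "Running". A minimal, hypothetical sketch of that evaluation; the struct and its field names are assumptions mirroring the templates used in this report, not minikube's internal types:

    package main

    import (
    	"fmt"
    	"os"
    	"text/template"
    )

    // status stands in for the data the template is rendered against.
    type status struct {
    	Host      string
    	APIServer string
    }

    func main() {
    	st := status{Host: "Running", APIServer: "Stopped"}
    	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
    	if err := tmpl.Execute(os.Stdout, st); err != nil { // prints: Stopped
    		fmt.Fprintln(os.Stderr, "template error:", err)
    	}
    }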
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (1.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-898615" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-898615\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-898615\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nf
sshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-898615\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonIm
ages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-898615" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-898615\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-898615\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-898615\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\
"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --o
utput json"
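Note: both assertions above decode the output of `out/minikube-linux-amd64 profile list --output json` and then check the node count and status of the "ha-898615" profile. A minimal, hypothetical Go sketch of that decode; the struct fields are limited to keys visible in the JSON quoted above, and everything else (binary path, error handling) is an assumption rather than the test suite's actual helper:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // profileList mirrors the top-level "valid"/"invalid" shape shown above.
    type profileList struct {
    	Valid []struct {
    		Name   string `json:"Name"`
    		Status string `json:"Status"`
    		Config struct {
    			Nodes []struct {
    				ControlPlane bool `json:"ControlPlane"`
    			} `json:"Nodes"`
    		} `json:"Config"`
    	} `json:"valid"`
    }

    func main() {
    	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output", "json").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "profile list failed:", err)
    		os.Exit(1)
    	}
    	var pl profileList
    	if err := json.Unmarshal(out, &pl); err != nil {
    		fmt.Fprintln(os.Stderr, "decode failed:", err)
    		os.Exit(1)
    	}
    	for _, p := range pl.Valid {
    		// The failing checks expect 4 nodes and status "HAppy" for ha-898615.
    		fmt.Printf("profile %s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
    	}
    }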
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-898615
helpers_test.go:243: (dbg) docker inspect ha-898615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	        "Created": "2025-10-09T19:27:46.185451139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213747,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:44:49.941874734Z",
	            "FinishedAt": "2025-10-09T19:44:48.613925122Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hostname",
	        "HostsPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/hosts",
	        "LogPath": "/var/lib/docker/containers/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e/24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e-json.log",
	        "Name": "/ha-898615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-898615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-898615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24eb0e5b6c52f395b49906db03c4c48a9724af0085bf08f5e0ec87bfb916c53e",
	                "LowerDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab6635fa679dc48c40a8713015caf6e6f0bf4eb735a5ef5cc0b54aaf57bda90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-898615",
	                "Source": "/var/lib/docker/volumes/ha-898615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-898615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-898615",
	                "name.minikube.sigs.k8s.io": "ha-898615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e470e353e06ffdcc8ac77f77b52e04dc5a3b643fb3168ea2b3827d52af8a235b",
	            "SandboxKey": "/var/run/docker/netns/e470e353e06f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-898615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:8e:9e:52:56:fb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a28b71fbb3464ef90fae817eea2f214ee0a3d1fe379af7ad6c6cc41b0261919e",
	                    "EndpointID": "d184af654fb96cd2156924061667ddadda3f85161b00b7d762c0f3c72fcbe2ef",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-898615",
	                        "24eb0e5b6c52"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
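Note: the NetworkSettings.Ports block in the docker inspect output above is where the post-mortem can see which host port Docker published for the node's 8443/tcp apiserver endpoint (here 127.0.0.1:32796). A small, hypothetical Go helper that extracts it from `docker inspect` output; the container name and the field selection are assumptions for illustration only:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // inspect models only the Ports map shown in the output above.
    type inspect struct {
    	NetworkSettings struct {
    		Ports map[string][]struct {
    			HostIp   string `json:"HostIp"`
    			HostPort string `json:"HostPort"`
    		} `json:"Ports"`
    	} `json:"NetworkSettings"`
    }

    func main() {
    	out, err := exec.Command("docker", "inspect", "ha-898615").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "docker inspect failed:", err)
    		os.Exit(1)
    	}
    	var containers []inspect
    	if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
    		fmt.Fprintln(os.Stderr, "unexpected inspect output:", err)
    		os.Exit(1)
    	}
    	for _, b := range containers[0].NetworkSettings.Ports["8443/tcp"] {
    		fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort)
    	}
    }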
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-898615 -n ha-898615: exit status 2 (307.013528ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-898615 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ kubectl │ ha-898615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node add --alsologtostderr -v 5                                                    │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node stop m02 --alsologtostderr -v 5                                               │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node start m02 --alsologtostderr -v 5                                              │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │                     │
	│ node    │ ha-898615 node list --alsologtostderr -v 5                                                   │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ stop    │ ha-898615 stop --alsologtostderr -v 5                                                        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ ha-898615 start --wait true --alsologtostderr -v 5                                           │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ node    │ ha-898615 node list --alsologtostderr -v 5                                                   │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:44 UTC │                     │
	│ node    │ ha-898615 node delete m03 --alsologtostderr -v 5                                             │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:44 UTC │                     │
	│ stop    │ ha-898615 stop --alsologtostderr -v 5                                                        │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:44 UTC │ 09 Oct 25 19:44 UTC │
	│ start   │ ha-898615 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:44 UTC │                     │
	│ node    │ ha-898615 node add --control-plane --alsologtostderr -v 5                                    │ ha-898615 │ jenkins │ v1.37.0 │ 09 Oct 25 19:50 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:44:49
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:44:49.701374  213529 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:44:49.701684  213529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:44:49.701694  213529 out.go:374] Setting ErrFile to fd 2...
	I1009 19:44:49.701699  213529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:44:49.701891  213529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:44:49.702347  213529 out.go:368] Setting JSON to false
	I1009 19:44:49.703363  213529 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5239,"bootTime":1760033851,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:44:49.703499  213529 start.go:143] virtualization: kvm guest
	I1009 19:44:49.705480  213529 out.go:179] * [ha-898615] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:44:49.706677  213529 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:44:49.706680  213529 notify.go:221] Checking for updates...
	I1009 19:44:49.709030  213529 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:44:49.710400  213529 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:44:49.711704  213529 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:44:49.712804  213529 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:44:49.713905  213529 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:44:49.715428  213529 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:49.715879  213529 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:44:49.737923  213529 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:44:49.738109  213529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:44:49.796426  213529 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:44:49.785317755 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:44:49.796539  213529 docker.go:319] overlay module found
	I1009 19:44:49.801541  213529 out.go:179] * Using the docker driver based on existing profile
	I1009 19:44:49.802798  213529 start.go:309] selected driver: docker
	I1009 19:44:49.802817  213529 start.go:930] validating driver "docker" against &{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:44:49.802903  213529 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:44:49.802989  213529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:44:49.866941  213529 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:44:49.857185251 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:44:49.867781  213529 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:44:49.867825  213529 cni.go:84] Creating CNI manager for ""
	I1009 19:44:49.867876  213529 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:44:49.867941  213529 start.go:353] cluster config:
	{Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1009 19:44:49.869783  213529 out.go:179] * Starting "ha-898615" primary control-plane node in "ha-898615" cluster
	I1009 19:44:49.871046  213529 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:44:49.872323  213529 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:44:49.873634  213529 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:44:49.873676  213529 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:44:49.873671  213529 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:44:49.873684  213529 cache.go:58] Caching tarball of preloaded images
	I1009 19:44:49.873769  213529 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:44:49.873780  213529 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:44:49.873868  213529 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:44:49.894117  213529 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:44:49.894140  213529 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:44:49.894160  213529 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:44:49.894193  213529 start.go:361] acquireMachinesLock for ha-898615: {Name:mk23a3ebf19c307a491c00f9452d757837bd240e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:44:49.894262  213529 start.go:365] duration metric: took 46.947µs to acquireMachinesLock for "ha-898615"
	I1009 19:44:49.894284  213529 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:44:49.894295  213529 fix.go:55] fixHost starting: 
	I1009 19:44:49.894546  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:49.912866  213529 fix.go:113] recreateIfNeeded on ha-898615: state=Stopped err=<nil>
	W1009 19:44:49.912910  213529 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:44:49.914819  213529 out.go:252] * Restarting existing docker container for "ha-898615" ...
	I1009 19:44:49.914886  213529 cli_runner.go:164] Run: docker start ha-898615
	I1009 19:44:50.154621  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:50.173856  213529 kic.go:430] container "ha-898615" state is running.
	I1009 19:44:50.174272  213529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:44:50.192860  213529 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/config.json ...
	I1009 19:44:50.193122  213529 machine.go:93] provisionDockerMachine start ...
	I1009 19:44:50.193203  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:50.211807  213529 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:50.212085  213529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:44:50.212111  213529 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:44:50.212792  213529 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45050->127.0.0.1:32793: read: connection reset by peer
	I1009 19:44:53.362882  213529 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
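The step above resolves the host port Docker mapped to the container's 22/tcp (here 32793) and then dials SSH on 127.0.0.1. A rough equivalent using the Docker Go SDK; this is a sketch, not minikube's own code:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
	"github.com/docker/go-connections/nat"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	// Same information as the "docker container inspect -f ... HostPort" template above.
	insp, err := cli.ContainerInspect(ctx, "ha-898615")
	if err != nil {
		log.Fatal(err)
	}
	bindings := insp.NetworkSettings.Ports[nat.Port("22/tcp")]
	if len(bindings) == 0 {
		log.Fatal("no host binding for 22/tcp")
	}
	fmt.Printf("ssh endpoint: 127.0.0.1:%s\n", bindings[0].HostPort)
}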
	
	I1009 19:44:53.362920  213529 ubuntu.go:182] provisioning hostname "ha-898615"
	I1009 19:44:53.363008  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:53.383229  213529 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:53.383482  213529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:44:53.383500  213529 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-898615 && echo "ha-898615" | sudo tee /etc/hostname
	I1009 19:44:53.540739  213529 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-898615
	
	I1009 19:44:53.540832  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:53.559203  213529 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:53.559489  213529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:44:53.559515  213529 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-898615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-898615/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-898615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:44:53.707903  213529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:44:53.707951  213529 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 19:44:53.707980  213529 ubuntu.go:190] setting up certificates
	I1009 19:44:53.707995  213529 provision.go:84] configureAuth start
	I1009 19:44:53.708056  213529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:44:53.726880  213529 provision.go:143] copyHostCerts
	I1009 19:44:53.726919  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:44:53.726954  213529 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 19:44:53.726969  213529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 19:44:53.727040  213529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 19:44:53.727121  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:44:53.727138  213529 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 19:44:53.727144  213529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 19:44:53.727170  213529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 19:44:53.727216  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:44:53.727232  213529 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 19:44:53.727242  213529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 19:44:53.727264  213529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 19:44:53.727314  213529 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.ha-898615 san=[127.0.0.1 192.168.49.2 ha-898615 localhost minikube]
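provision.go:117 above issues a server certificate signed by the minikube CA with SANs [127.0.0.1 192.168.49.2 ha-898615 localhost minikube]. A condensed crypto/x509 sketch of that kind of issuance; the throwaway CA and skipped error handling are assumptions for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA (ca.pem / ca-key.pem above).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)

	// Server certificate with the same SANs as the provision.go line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-898615"}},
		DNSNames:     []string{"ha-898615", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey)
	fmt.Println(len(der), err)
}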
	I1009 19:44:53.827371  213529 provision.go:177] copyRemoteCerts
	I1009 19:44:53.827447  213529 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:44:53.827485  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:53.846303  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:53.951136  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:44:53.951199  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:44:53.969281  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:44:53.969347  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:44:53.987249  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:44:53.987314  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:44:54.005891  213529 provision.go:87] duration metric: took 297.874582ms to configureAuth
	I1009 19:44:54.005921  213529 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:44:54.006109  213529 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:54.006224  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.024397  213529 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:54.024626  213529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:44:54.024642  213529 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:44:54.289546  213529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:44:54.289573  213529 machine.go:96] duration metric: took 4.096433967s to provisionDockerMachine
	I1009 19:44:54.289589  213529 start.go:294] postStartSetup for "ha-898615" (driver="docker")
	I1009 19:44:54.289601  213529 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:44:54.289664  213529 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:44:54.289714  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.308340  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:54.413217  213529 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:44:54.417126  213529 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:44:54.417190  213529 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:44:54.417225  213529 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 19:44:54.417286  213529 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 19:44:54.417372  213529 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 19:44:54.417406  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /etc/ssl/certs/1415192.pem
	I1009 19:44:54.417501  213529 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:44:54.425333  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:44:54.443854  213529 start.go:297] duration metric: took 154.246925ms for postStartSetup
	I1009 19:44:54.443940  213529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:44:54.443976  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.461915  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:54.563125  213529 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:44:54.568111  213529 fix.go:57] duration metric: took 4.673810177s for fixHost
	I1009 19:44:54.568142  213529 start.go:84] releasing machines lock for "ha-898615", held for 4.673868514s
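The two df invocations above sample /var usage (percent used) and available space in GiB over SSH. Run locally, the same numbers can be read with a statfs call; a sketch assuming golang.org/x/sys/unix on Linux:

package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/var", &st); err != nil {
		log.Fatal(err)
	}
	total := st.Blocks * uint64(st.Bsize)
	free := st.Bavail * uint64(st.Bsize)
	// Roughly df's "Use%" and the "-BG ... $4" (available GiB) values read above.
	fmt.Printf("used %.0f%%, %d GiB available\n",
		100*float64(total-free)/float64(total), free>>30)
}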
	I1009 19:44:54.568206  213529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-898615
	I1009 19:44:54.586879  213529 ssh_runner.go:195] Run: cat /version.json
	I1009 19:44:54.586918  213529 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:44:54.586944  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.586979  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:54.606718  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:54.607259  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:54.762981  213529 ssh_runner.go:195] Run: systemctl --version
	I1009 19:44:54.769817  213529 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:44:54.808737  213529 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:44:54.813835  213529 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:44:54.813899  213529 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:44:54.822567  213529 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:44:54.822602  213529 start.go:496] detecting cgroup driver to use...
	I1009 19:44:54.822639  213529 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:44:54.822691  213529 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:44:54.837558  213529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:44:54.850649  213529 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:44:54.850721  213529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:44:54.865664  213529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:44:54.878415  213529 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:44:54.957542  213529 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:44:55.038948  213529 docker.go:234] disabling docker service ...
	I1009 19:44:55.039033  213529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:44:55.054311  213529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:44:55.066894  213529 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:44:55.146756  213529 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:44:55.226886  213529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:44:55.239751  213529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:44:55.254322  213529 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:44:55.254392  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.263683  213529 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:44:55.263764  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.272570  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.281877  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.291212  213529 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:44:55.299205  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.308053  213529 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.316488  213529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:55.325623  213529 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:44:55.333246  213529 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:44:55.340957  213529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:44:55.420337  213529 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:44:55.530206  213529 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:44:55.530277  213529 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:44:55.534555  213529 start.go:564] Will wait 60s for crictl version
	I1009 19:44:55.534616  213529 ssh_runner.go:195] Run: which crictl
	I1009 19:44:55.538439  213529 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:44:55.564260  213529 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
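crictl above talks to /var/run/crio/crio.sock over the CRI gRPC API. The same version query can be made directly with the k8s.io/cri-api client; a sketch assuming a current grpc-go with insecure credentials, not how minikube itself does it:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	// Same endpoint the crictl call above uses.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	resp, err := runtimeapi.NewRuntimeServiceClient(conn).Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("RuntimeName=%s RuntimeVersion=%s RuntimeApiVersion=%s\n",
		resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}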
	I1009 19:44:55.564337  213529 ssh_runner.go:195] Run: crio --version
	I1009 19:44:55.593049  213529 ssh_runner.go:195] Run: crio --version
	I1009 19:44:55.622200  213529 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:44:55.623540  213529 cli_runner.go:164] Run: docker network inspect ha-898615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:44:55.641466  213529 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:44:55.646233  213529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:44:55.657668  213529 kubeadm.go:883] updating cluster {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClien
tPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:44:55.657780  213529 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:44:55.657822  213529 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:44:55.689903  213529 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:44:55.689929  213529 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:44:55.689989  213529 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:44:55.716841  213529 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:44:55.716874  213529 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:44:55.716885  213529 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:44:55.717021  213529 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-898615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:44:55.717104  213529 ssh_runner.go:195] Run: crio config
	I1009 19:44:55.762724  213529 cni.go:84] Creating CNI manager for ""
	I1009 19:44:55.762743  213529 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:44:55.762760  213529 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:44:55.762781  213529 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-898615 NodeName:ha-898615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:44:55.762917  213529 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-898615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:44:55.762981  213529 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:44:55.771348  213529 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:44:55.771430  213529 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:44:55.779128  213529 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:44:55.792326  213529 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:44:55.805801  213529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 19:44:55.818503  213529 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:44:55.822410  213529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:44:55.832657  213529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:44:55.914951  213529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:44:55.941861  213529 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615 for IP: 192.168.49.2
	I1009 19:44:55.941890  213529 certs.go:195] generating shared ca certs ...
	I1009 19:44:55.941926  213529 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:44:55.942116  213529 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 19:44:55.942169  213529 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 19:44:55.942183  213529 certs.go:257] generating profile certs ...
	I1009 19:44:55.942287  213529 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key
	I1009 19:44:55.942359  213529 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key.ff60cacd
	I1009 19:44:55.942424  213529 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key
	I1009 19:44:55.942440  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:44:55.942457  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:44:55.942474  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:44:55.942488  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:44:55.942501  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:44:55.942518  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:44:55.942537  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:44:55.942552  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:44:55.942619  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 19:44:55.942659  213529 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 19:44:55.942668  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:44:55.942696  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:44:55.942725  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:44:55.942757  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 19:44:55.942808  213529 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 19:44:55.942845  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> /usr/share/ca-certificates/1415192.pem
	I1009 19:44:55.942867  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:44:55.942884  213529 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem -> /usr/share/ca-certificates/141519.pem
	I1009 19:44:55.943621  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:44:55.964066  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:44:55.983870  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:44:56.003424  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:44:56.027059  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1009 19:44:56.045446  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:44:56.062784  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:44:56.080346  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:44:56.098356  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 19:44:56.115529  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:44:56.133046  213529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 19:44:56.151123  213529 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:44:56.164371  213529 ssh_runner.go:195] Run: openssl version
	I1009 19:44:56.171082  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 19:44:56.180682  213529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 19:44:56.184714  213529 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 19:44:56.184782  213529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 19:44:56.219575  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:44:56.228330  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:44:56.237302  213529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:44:56.241163  213529 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:44:56.241221  213529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:44:56.275220  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:44:56.283849  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 19:44:56.292853  213529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 19:44:56.296942  213529 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 19:44:56.297002  213529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 19:44:56.331446  213529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
	I1009 19:44:56.340819  213529 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:44:56.344986  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:44:56.380467  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:44:56.415493  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:44:56.456227  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:44:56.501884  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:44:56.538941  213529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
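Each "openssl x509 -checkend 86400" run above asks whether the certificate expires within the next 24 hours. An equivalent check in Go; the path is one of the files checked above and the program is only a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Same idea as: openssl x509 -noout -in <file> -checkend 86400
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("not PEM")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
	} else {
		fmt.Println("certificate is good for at least 24h")
	}
}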
	I1009 19:44:56.573879  213529 kubeadm.go:400] StartCluster: {Name:ha-898615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-898615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPa
th: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:44:56.573988  213529 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:44:56.574038  213529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:44:56.602728  213529 cri.go:89] found id: ""
	I1009 19:44:56.602785  213529 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:44:56.610971  213529 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:44:56.610988  213529 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:44:56.611028  213529 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:44:56.618277  213529 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:44:56.618823  213529 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-898615" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:44:56.618971  213529 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-137890/kubeconfig needs updating (will repair): [kubeconfig missing "ha-898615" cluster setting kubeconfig missing "ha-898615" context setting]
	I1009 19:44:56.619299  213529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:44:56.619977  213529 kapi.go:59] client config for ha-898615: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
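The kapi.go dump above is a client-go rest.Config pointing at https://192.168.49.2:8443 with the profile's client certificate and key. A minimal sketch of building an equivalent client, assuming only what the dump shows, follows; the final Nodes().Get call is the same request the node_ready poller further down keeps retrying.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// With the apiserver down this fails with "connection refused", exactly as
	// the node_ready.go warnings later in the log show.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-898615", metav1.GetOptions{})
	if err != nil {
		fmt.Println("get node:", err)
		return
	}
	fmt.Println("node:", node.Name)
}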
	I1009 19:44:56.620511  213529 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:44:56.620536  213529 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:44:56.620544  213529 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:44:56.620550  213529 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:44:56.620560  213529 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:44:56.620569  213529 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:44:56.621020  213529 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:44:56.628535  213529 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:44:56.628568  213529 kubeadm.go:601] duration metric: took 17.574485ms to restartPrimaryControlPlane
	I1009 19:44:56.628593  213529 kubeadm.go:402] duration metric: took 54.723918ms to StartCluster
	I1009 19:44:56.628613  213529 settings.go:142] acquiring lock: {Name:mk34466033fb866bc8e167d2d953624ad0802283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:44:56.628681  213529 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:44:56.629423  213529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/kubeconfig: {Name:mk5c9daa45c28055e34aff375b7036de6bf3de3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:44:56.629662  213529 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:44:56.629817  213529 config.go:182] Loaded profile config "ha-898615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:56.629772  213529 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:44:56.629859  213529 addons.go:69] Setting storage-provisioner=true in profile "ha-898615"
	I1009 19:44:56.629886  213529 addons.go:238] Setting addon storage-provisioner=true in "ha-898615"
	I1009 19:44:56.629917  213529 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:44:56.629863  213529 addons.go:69] Setting default-storageclass=true in profile "ha-898615"
	I1009 19:44:56.630024  213529 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-898615"
	I1009 19:44:56.630251  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:56.630340  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:56.639537  213529 out.go:179] * Verifying Kubernetes components...
	I1009 19:44:56.640997  213529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:44:56.650141  213529 kapi.go:59] client config for ha-898615: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/profiles/ha-898615/client.key", CAFile:"/home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:44:56.650441  213529 addons.go:238] Setting addon default-storageclass=true in "ha-898615"
	I1009 19:44:56.650481  213529 host.go:66] Checking if "ha-898615" exists ...
	I1009 19:44:56.651083  213529 cli_runner.go:164] Run: docker container inspect ha-898615 --format={{.State.Status}}
	I1009 19:44:56.651372  213529 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:44:56.652834  213529 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:44:56.652857  213529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:44:56.652904  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:56.672424  213529 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:44:56.672449  213529 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:44:56.672517  213529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-898615
	I1009 19:44:56.673443  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:56.697607  213529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/ha-898615/id_rsa Username:docker}
	I1009 19:44:56.750572  213529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:44:56.764278  213529 node_ready.go:35] waiting up to 6m0s for node "ha-898615" to be "Ready" ...
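node_ready.go above starts a poll that waits up to 6m0s for the node's Ready condition, retrying through the connection-refused errors that recur roughly every 2.5 seconds below. A self-contained sketch of that kind of loop (the helper name waitNodeReady is illustrative, not minikube's code) could look like:

package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the named node until its Ready condition is True or the
// timeout expires, treating API errors (e.g. connection refused) as retryable.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2500 * time.Millisecond)
	}
	return fmt.Errorf("node %q was not Ready within %s", name, timeout)
}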
	I1009 19:44:56.790989  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:44:56.807492  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:56.848417  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:56.848472  213529 retry.go:31] will retry after 181.300226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:44:56.863090  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:56.863123  213529 retry.go:31] will retry after 174.582695ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.030457  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:44:57.038253  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:57.099728  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.099769  213529 retry.go:31] will retry after 488.394922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:44:57.103491  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.103516  213529 retry.go:31] will retry after 360.880737ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.464716  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:57.519993  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.520038  213529 retry.go:31] will retry after 545.599641ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.589293  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:44:57.644623  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.644657  213529 retry.go:31] will retry after 328.462818ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:57.973799  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:44:58.029584  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.029617  213529 retry.go:31] will retry after 567.831757ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.065802  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:58.119966  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.120001  213529 retry.go:31] will retry after 1.041516604s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.598304  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:44:58.652889  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:58.652928  213529 retry.go:31] will retry after 716.276622ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:44:58.765698  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:44:59.162239  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:44:59.218055  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:59.218096  213529 retry.go:31] will retry after 1.23966397s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:59.370025  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:44:59.425158  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:44:59.425205  213529 retry.go:31] will retry after 1.359321817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:00.458325  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:00.515848  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:00.515882  213529 retry.go:31] will retry after 2.661338285s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:00.765913  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:00.785102  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:00.843571  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:00.843612  213529 retry.go:31] will retry after 2.328073619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:03.172702  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:45:03.177348  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:03.230554  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:03.230592  213529 retry.go:31] will retry after 6.157061735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:03.231964  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:03.231992  213529 retry.go:31] will retry after 2.442330177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:03.265673  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:05.674886  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:05.729807  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:05.729849  213529 retry.go:31] will retry after 3.612542584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:05.765524  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:08.265205  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:09.342654  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:45:09.388406  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:09.399682  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:09.399719  213529 retry.go:31] will retry after 6.61412336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:09.445445  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:09.445496  213529 retry.go:31] will retry after 9.139498483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:10.265494  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:12.765436  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:15.265528  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:16.014029  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:16.069677  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:16.069742  213529 retry.go:31] will retry after 11.238798751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:17.265736  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:18.585243  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:18.639573  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:18.639614  213529 retry.go:31] will retry after 11.58446266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:19.765693  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:22.265326  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:24.765252  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:27.265337  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:27.309539  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:27.366695  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:27.366733  213529 retry.go:31] will retry after 11.52939287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:29.765203  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:30.224984  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:30.281273  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:30.281310  213529 retry.go:31] will retry after 18.613032536s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:31.765443  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:34.264978  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:36.265283  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:38.265904  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:38.897369  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:38.954353  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:38.954404  213529 retry.go:31] will retry after 17.265980832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:40.764949  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:42.765551  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:45.265513  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:47.765300  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:48.895015  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:45:48.951679  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:48.951716  213529 retry.go:31] will retry after 21.892988656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:49.765899  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:52.265621  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:54.765488  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:45:56.220544  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:45:56.276573  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:45:56.276606  213529 retry.go:31] will retry after 23.018555863s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:45:56.765898  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:45:59.265629  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:01.765354  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:04.265243  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:06.265681  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:08.765326  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:46:10.845467  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:46:10.902210  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:46:10.902367  213529 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1009 19:46:11.265125  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:13.265908  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:15.765756  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:18.265480  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:46:19.296047  213529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:46:19.352095  213529 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:46:19.352218  213529 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:46:19.354638  213529 out.go:179] * Enabled addons: 
	I1009 19:46:19.355945  213529 addons.go:514] duration metric: took 1m22.726170913s for enable addons: enabled=[]
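Every failure in this phase reduces to the same symptom: the localhost:8443 endpoint used by the on-node kubectl and the 192.168.49.2:8443 endpoint used by the test host both refuse connections, so the apiserver never came up and the addon phase ends with enabled=[]. A quick way to confirm that symptom from Go (purely illustrative, not part of the test suite) is a plain TCP dial:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the node_ready.go errors above.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}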
	W1009 19:46:20.265755  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:22.765780  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:25.265698  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:27.765417  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:30.265282  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:32.765120  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:34.765843  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:37.265533  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:39.765374  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:42.265123  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:44.265770  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:46.765835  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:49.265312  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:51.765030  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:53.765470  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:56.265306  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:46:58.765186  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:01.265058  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:03.265774  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:05.765635  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:08.265576  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:10.765798  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:13.265761  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:15.765119  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:18.265077  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:20.265918  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:22.765962  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:25.264961  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:27.265736  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:29.765764  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:32.265610  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:34.765657  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:37.265747  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:39.765491  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:42.265404  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:44.765558  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:47.265514  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:49.765290  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:52.265293  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:54.765328  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:56.765484  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:47:59.265305  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:01.765149  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:03.765915  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:06.265924  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:08.765952  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:11.265908  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:13.765815  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:16.264979  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:18.765137  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:20.765889  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:23.265669  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:25.765672  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:28.265325  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:30.765046  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:32.765624  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:35.265556  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:37.265628  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:39.765520  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:42.265310  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:44.765281  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:47.265094  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:49.265648  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:51.765426  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:53.765652  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:56.264999  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:48:58.265225  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:00.265508  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:02.265792  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:04.765259  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:06.765636  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:09.265082  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:11.265335  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:13.765305  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:15.765755  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:18.264951  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:20.265332  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:22.265635  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:24.265896  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:26.765549  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:29.265008  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:31.265176  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:33.765225  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:35.765576  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:38.265134  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:40.265493  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:42.765451  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:44.765511  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:47.265203  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:49.765355  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:52.265333  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:54.765296  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:56.765453  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:49:58.765650  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:01.265098  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:03.265263  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:05.265665  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:07.765412  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:09.765500  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:11.765824  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:14.264992  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:16.265244  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:18.265292  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:20.265689  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:22.765039  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:24.765172  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:26.765326  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:29.265183  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:31.764935  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:34.265071  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:36.265451  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:38.265823  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:40.765096  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:43.264909  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:45.265266  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:47.265687  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:49.765194  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:51.765247  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:54.265134  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:50:56.265319  213529 node_ready.go:55] error getting node "ha-898615" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-898615": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:50:56.765195  213529 node_ready.go:38] duration metric: took 6m0.000867219s for node "ha-898615" to be "Ready" ...
	I1009 19:50:56.767874  213529 out.go:203] 
	W1009 19:50:56.769214  213529 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:50:56.769244  213529 out.go:285] * 
	W1009 19:50:56.771156  213529 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:50:56.772598  213529 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.061852385Z" level=info msg="createCtr: removing container 9d14890568eccae5c7353af71ccef59cb0fe1dda70f89b6799dd6a12b92042dd" id=d7600065-5a91-46f4-affe-f31461470a38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.06188441Z" level=info msg="createCtr: deleting container 9d14890568eccae5c7353af71ccef59cb0fe1dda70f89b6799dd6a12b92042dd from storage" id=d7600065-5a91-46f4-affe-f31461470a38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:53 ha-898615 crio[517]: time="2025-10-09T19:50:53.064105448Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-898615_kube-system_bc0e16a1814c5389485436acdfc968ed_0" id=d7600065-5a91-46f4-affe-f31461470a38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.0355101Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=0d089190-0255-4132-8944-47e3a171eec9 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.036424184Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=a6914955-8723-4589-afa9-5090877fc579 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.037428452Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-898615/kube-controller-manager" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.037644021Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.041015164Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.041462284Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.057834228Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.059302014Z" level=info msg="createCtr: deleting container ID ce64a8d012383f65cc84c71a301fcffb36f3b33f4d99f9979ab7ca8250098045 from idIndex" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.05934333Z" level=info msg="createCtr: removing container ce64a8d012383f65cc84c71a301fcffb36f3b33f4d99f9979ab7ca8250098045" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.059392517Z" level=info msg="createCtr: deleting container ce64a8d012383f65cc84c71a301fcffb36f3b33f4d99f9979ab7ca8250098045 from storage" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:50:56 ha-898615 crio[517]: time="2025-10-09T19:50:56.061445962Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-898615_kube-system_606a9e8949f01295d66148e5eac379ce_0" id=579fa4c9-b011-47e2-bcb0-b86079e824ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:51:02 ha-898615 crio[517]: time="2025-10-09T19:51:02.036295187Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=79166897-244b-40c3-936b-1110aa817d24 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:51:02 ha-898615 crio[517]: time="2025-10-09T19:51:02.037340881Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=255c4492-6418-4eb8-9a0d-392eeb92b606 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:51:02 ha-898615 crio[517]: time="2025-10-09T19:51:02.038338596Z" level=info msg="Creating container: kube-system/etcd-ha-898615/etcd" id=5092194d-0bbc-436a-9c4b-0d2dbf2d6b99 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:51:02 ha-898615 crio[517]: time="2025-10-09T19:51:02.03865221Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:51:02 ha-898615 crio[517]: time="2025-10-09T19:51:02.043829712Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:51:02 ha-898615 crio[517]: time="2025-10-09T19:51:02.044435119Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:51:02 ha-898615 crio[517]: time="2025-10-09T19:51:02.061115284Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=5092194d-0bbc-436a-9c4b-0d2dbf2d6b99 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:51:02 ha-898615 crio[517]: time="2025-10-09T19:51:02.062888673Z" level=info msg="createCtr: deleting container ID 8a78c6bb518e947020901e0cc329e5a59bcede8b2dd3e5b145a39c5e1597b03a from idIndex" id=5092194d-0bbc-436a-9c4b-0d2dbf2d6b99 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:51:02 ha-898615 crio[517]: time="2025-10-09T19:51:02.062942975Z" level=info msg="createCtr: removing container 8a78c6bb518e947020901e0cc329e5a59bcede8b2dd3e5b145a39c5e1597b03a" id=5092194d-0bbc-436a-9c4b-0d2dbf2d6b99 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:51:02 ha-898615 crio[517]: time="2025-10-09T19:51:02.062987056Z" level=info msg="createCtr: deleting container 8a78c6bb518e947020901e0cc329e5a59bcede8b2dd3e5b145a39c5e1597b03a from storage" id=5092194d-0bbc-436a-9c4b-0d2dbf2d6b99 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:51:02 ha-898615 crio[517]: time="2025-10-09T19:51:02.065559554Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-898615_kube-system_2d43b86ac03e93e11f35c923e2560103_0" id=5092194d-0bbc-436a-9c4b-0d2dbf2d6b99 name=/runtime.v1.RuntimeService/CreateContainer
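The repeated "Container creation error: cannot open sd-bus: No such file or directory" entries above indicate that the OCI runtime invoked by CRI-O is trying to reach systemd over D-Bus while creating containers, which normally only matters when the systemd cgroup manager is in use and the bus socket is missing inside the node. A minimal diagnostic sketch, assuming the ha-898615 profile from these logs and the default kicbase config locations (both assumptions, not verified here):

	# Check which cgroup manager CRI-O is configured with (config paths may differ by base-image version)
	minikube -p ha-898615 ssh -- sudo grep -r cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null
	# Check whether the systemd/D-Bus sockets that sd-bus needs exist inside the node
	minikube -p ha-898615 ssh -- ls -l /run/systemd/private /run/dbus/system_bus_socket
	# List the failed create attempts directly through CRI-O (the same crictl invocation kubeadm suggests)
	minikube -p ha-898615 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a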
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:51:02.674751    2539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:51:02.675343    2539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:51:02.677902    2539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:51:02.678450    2539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:51:02.680224    2539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
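Every kubectl call here fails with "connection refused" on localhost:8443, which is consistent with the kube-apiserver container never having been created (see the CRI-O errors above) rather than with a kubeconfig problem. A quick check from inside the node, sketched under the assumption that the profile and port match the logs above:

	# Is anything listening on the apiserver port inside the node?
	minikube -p ha-898615 ssh -- sudo ss -tlnp | grep 8443
	# If something is listening, does the apiserver answer its liveness endpoint? (-k: self-signed test CA)
	minikube -p ha-898615 ssh -- curl -ks https://localhost:8443/livez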
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:51:02 up  1:33,  0 user,  load average: 0.24, 0.07, 0.74
	Linux ha-898615 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:50:53 ha-898615 kubelet[666]:  > logger="UnhandledError"
	Oct 09 19:50:53 ha-898615 kubelet[666]: E1009 19:50:53.064571     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-898615" podUID="bc0e16a1814c5389485436acdfc968ed"
	Oct 09 19:50:55 ha-898615 kubelet[666]: E1009 19:50:55.842360     666 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 09 19:50:56 ha-898615 kubelet[666]: E1009 19:50:56.035079     666 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:50:56 ha-898615 kubelet[666]: E1009 19:50:56.051793     666 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-898615\" not found"
	Oct 09 19:50:56 ha-898615 kubelet[666]: E1009 19:50:56.061763     666 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:50:56 ha-898615 kubelet[666]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:50:56 ha-898615 kubelet[666]:  > podSandboxID="c69416833813892406432d22789fcb941cf442d503fc8a7a72d459c819b42203"
	Oct 09 19:50:56 ha-898615 kubelet[666]: E1009 19:50:56.061885     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:50:56 ha-898615 kubelet[666]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-898615_kube-system(606a9e8949f01295d66148e5eac379ce): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:50:56 ha-898615 kubelet[666]:  > logger="UnhandledError"
	Oct 09 19:50:56 ha-898615 kubelet[666]: E1009 19:50:56.061937     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-898615" podUID="606a9e8949f01295d66148e5eac379ce"
	Oct 09 19:50:58 ha-898615 kubelet[666]: E1009 19:50:58.083759     666 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-898615.186cea3b954fcad9  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-898615,UID:ha-898615,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-898615 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-898615,},FirstTimestamp:2025-10-09 19:44:56.024025817 +0000 UTC m=+0.079267356,LastTimestamp:2025-10-09 19:44:56.024025817 +0000 UTC m=+0.079267356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-898615,}"
	Oct 09 19:50:58 ha-898615 kubelet[666]: E1009 19:50:58.682082     666 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-898615?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:50:58 ha-898615 kubelet[666]: I1009 19:50:58.861025     666 kubelet_node_status.go:75] "Attempting to register node" node="ha-898615"
	Oct 09 19:50:58 ha-898615 kubelet[666]: E1009 19:50:58.861511     666 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-898615"
	Oct 09 19:51:02 ha-898615 kubelet[666]: E1009 19:51:02.035731     666 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-898615\" not found" node="ha-898615"
	Oct 09 19:51:02 ha-898615 kubelet[666]: E1009 19:51:02.065926     666 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:51:02 ha-898615 kubelet[666]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:51:02 ha-898615 kubelet[666]:  > podSandboxID="cf0e452511867f381d80e053fda57591ecec23f4216ae7ede03cff3c28a2614c"
	Oct 09 19:51:02 ha-898615 kubelet[666]: E1009 19:51:02.066042     666 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:51:02 ha-898615 kubelet[666]:         container etcd start failed in pod etcd-ha-898615_kube-system(2d43b86ac03e93e11f35c923e2560103): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:51:02 ha-898615 kubelet[666]:  > logger="UnhandledError"
	Oct 09 19:51:02 ha-898615 kubelet[666]: E1009 19:51:02.066079     666 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-898615" podUID="2d43b86ac03e93e11f35c923e2560103"
	Oct 09 19:51:02 ha-898615 kubelet[666]: E1009 19:51:02.344991     666 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-898615&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	

                                                
                                                
-- /stdout --
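The kubelet section in the captured bundle above shows only the most recent entries; when reproducing this failure, the full journal and a complete log bundle are usually more useful. A sketch that reuses the --file flag the failure message itself recommends (assumes the ha-898615 profile is still present):

	# Full kubelet journal from the node
	minikube -p ha-898615 ssh -- sudo journalctl -u kubelet --no-pager | tail -n 200
	# Complete minikube log bundle suitable for attaching to a GitHub issue
	out/minikube-linux-amd64 -p ha-898615 logs --file=logs.txt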
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-898615 -n ha-898615: exit status 2 (306.539247ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-898615" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.65s)
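All of the TestMultiControlPlane failures above share the same root symptom: every control-plane container (etcd, kube-apiserver, kube-scheduler, kube-controller-manager) fails at CreateContainer with the sd-bus error, so the apiserver never comes up and each dependent subtest then fails on "connection refused". Before re-running the full suite, a single verbose reproduction is usually cheaper; a sketch using a hypothetical throwaway profile name:

	# Hypothetical profile name "sdbus-repro"; --alsologtostderr and -v=7 surface the runtime errors inline
	out/minikube-linux-amd64 start -p sdbus-repro --driver=docker --container-runtime=crio --alsologtostderr -v=7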

                                                
                                    
x
+
TestJSONOutput/start/Command (498.05s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-487749 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1009 19:53:37.184597  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:58:37.185217  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-487749 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: exit status 80 (8m18.052371359s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5d992eb1-84d3-4ce1-9842-892bd88c17fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-487749] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"42b862fe-8ce8-4a65-9cca-bb6548188267","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21683"}}
	{"specversion":"1.0","id":"234d2cdc-fbf9-45d1-b26b-6cc1389a5433","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"09ec2759-0355-476f-b84d-069844751046","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig"}}
	{"specversion":"1.0","id":"7ef1fbd8-524f-4ebf-b09c-3f0828549d64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube"}}
	{"specversion":"1.0","id":"3a5f8e28-2dc9-45f2-93a6-37bae921db00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3c4abc49-32ce-4638-a8be-64aee4541de8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"195c9a0d-7371-469a-a4f8-3c9677d0784e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2d5e973e-f2ea-41d8-90ae-50598ec3fa82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"3a5c10af-4e2d-4565-9476-4c084d08f9a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-487749\" primary control-plane node in \"json-output-487749\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f05674ac-3cd4-4798-8c23-212d33a4f1fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759745255-21703 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"49973140-cf14-4309-90a4-c39fd7c40250","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6fe80abe-299c-4c13-bc01-0dd06b09b279","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"11","message":"Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...","name":"Preparing Kubernetes","totalsteps":"19"}}
	{"specversion":"1.0","id":"afd6d4da-ce07-45bc-9e47-e0bcfeb6dbaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"12","message":"Generating certificates and keys ...","name":"Generating certificates","totalsteps":"19"}}
	{"specversion":"1.0","id":"bfc1bd46-decb-4cb8-8b11-48e4d097c3b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"13","message":"Booting up control plane ...","name":"Booting control plane","totalsteps":"19"}}
	{"specversion":"1.0","id":"2d639120-52d1-4ad4-919d-6cc7f03a6d1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Pri
nting the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\
n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-487749 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-487749 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writi
ng \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the ku
belet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.187131ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000246367s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000334968s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000502329s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using
your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check
failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"}}
	{"specversion":"1.0","id":"dc041af6-3894-49bb-8131-615def201508","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"12","message":"Generating certificates and keys ...","name":"Generating certificates","totalsteps":"19"}}
	{"specversion":"1.0","id":"071bfe1e-4e4b-4366-869d-791127486191","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"13","message":"Booting up control plane ...","name":"Booting control plane","totalsteps":"19"}}
	{"specversion":"1.0","id":"8cf00656-a191-421b-aa0d-203781755f4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the outpu
t from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using
existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[
etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/health
z. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001886766s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.00060807s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001051434s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.001002366s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pa
use'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:1025
7/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"}}
	{"specversion":"1.0","id":"01d60ff4-b028-4f2f-bb0e-cc6bd216e075","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system v
erification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/va
r/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy ku
belet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001886766s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.00060807s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001051434s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.001002366s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cr
io.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager
check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher","name":"GUEST_START","url":""}}
	{"specversion":"1.0","id":"dec50c94-f10e-4c88-8f62-3f3c2f6f63c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 start -p json-output-487749 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio": exit status 80
--- FAIL: TestJSONOutput/start/Command (498.05s)
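The failure above comes from "minikube start --output=json", which reports progress as CloudEvents, one JSON object per line on stdout (the io.k8s.sigs.minikube.step / .info / .error events quoted throughout this section). As a rough sketch only, and not the actual json_output_test.go harness, a consumer of that stream could look like the Go program below; the profile name and the scanner buffer size are arbitrary placeholders.

// Minimal sketch (not the real test harness): decode the CloudEvents that
// "minikube start --output=json" prints, one JSON object per line, and
// report the "step" events. Field names mirror the events quoted above;
// the profile name "json-output-demo" is a placeholder.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

type cloudEvent struct {
	Type string            `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
	Data map[string]string `json:"data"` // currentstep, totalsteps, message, name, ...
}

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "json-output-demo", "--output=json",
		"--driver=docker", "--container-runtime=crio")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events carry long kubeadm output
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore non-JSON lines in this sketch
		}
		if ev.Type == "io.k8s.sigs.minikube.step" {
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		}
	}

	if err := cmd.Wait(); err != nil {
		fmt.Println("minikube exited with error:", err)
	}
}

In the run above, minikube exits with status 80, matching the GUEST_START error event (exitcode "80") that the test records as "failed to clean up".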

x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 12 has already been assigned to another step:
Generating certificates and keys ...
Cannot use for:
Generating certificates and keys ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 5d992eb1-84d3-4ce1-9842-892bd88c17fa
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-487749] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 42b862fe-8ce8-4a65-9cca-bb6548188267
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=21683"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 234d2cdc-fbf9-45d1-b26b-6cc1389a5433
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 09ec2759-0355-476f-b84d-069844751046
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 7ef1fbd8-524f-4ebf-b09c-3f0828549d64
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 3a5f8e28-2dc9-45f2-93a6-37bae921db00
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-linux-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 3c4abc49-32ce-4638-a8be-64aee4541de8
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 195c9a0d-7371-469a-a4f8-3c9677d0784e
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 2d5e973e-f2ea-41d8-90ae-50598ec3fa82
datacontenttype: application/json
Data,
{
"message": "Using Docker driver with root privileges"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 3a5c10af-4e2d-4565-9476-4c084d08f9a1
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-487749\" primary control-plane node in \"json-output-487749\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: f05674ac-3cd4-4798-8c23-212d33a4f1fb
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image v0.0.48-1759745255-21703 ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 49973140-cf14-4309-90a4-c39fd7c40250
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=3072MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 6fe80abe-299c-4c13-bc01-0dd06b09b279
datacontenttype: application/json
Data,
{
"currentstep": "11",
"message": "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...",
"name": "Preparing Kubernetes",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: afd6d4da-ce07-45bc-9e47-e0bcfeb6dbaf
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: bfc1bd46-decb-4cb8-8b11-48e4d097c3b6
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 2d639120-52d1-4ad4-919d-6cc7f03a6d1a
datacontenttype: application/json
Data,
{
"message": "initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGR
OUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[c
erts] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-487749 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-487749 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kub
elet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.187131ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000246367s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000334968s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000502329s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cr
io.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:
10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: dc041af6-3894-49bb-8131-615def201508
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 071bfe1e-4e4b-4366-869d-791127486191
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 8cf00656-a191-421b-aa0d-203781755f4f
datacontenttype: application/json
Data,
{
"message": "Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[
0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] U
sing existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating stati
c Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001886766s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[
control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.00060807s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001051434s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.001002366s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WAR
NING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 01d60ff4-b028-4f2f-bb0e-cc6bd216e075
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m
: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Usi
ng existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static
Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001886766s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[co
ntrol-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.00060807s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001051434s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.001002366s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNI
NG SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher",
"name": "GUEST_START",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: dec50c94-f10e-4c88-8f62-3f3c2f6f63c3
datacontenttype: application/json
Data,
{
"message": "╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰────────────────────────────────────────
───────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
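For context, the json_output_test.go:114 check that fails here appears to assert that each "currentstep" value is claimed by only one step event. Because kubeadm init fails and is retried, the run re-emits steps 12 and 13, so the second occurrence of step 12 trips the assertion (the IncreasingCurrentSteps check below fails for the same reason, since the sequence drops from 13 back to 12). A minimal illustration of such a distinctness check, using hypothetical types rather than the real test code:

// Illustrative sketch of the kind of check that fails above (not the actual
// json_output_test.go code): every "currentstep" value in the stream of step
// events must be assigned exactly once. The retried kubeadm init re-emits
// steps 12 and 13, so the second occurrence of step 12 trips the check.
package main

import "fmt"

type stepEvent struct {
	CurrentStep string // e.g. "12"
	Message     string // e.g. "Generating certificates and keys ..."
}

// distinctCurrentSteps returns an error for the first reused currentstep value.
func distinctCurrentSteps(events []stepEvent) error {
	seen := map[string]string{} // currentstep -> first message that claimed it
	for _, ev := range events {
		if prev, ok := seen[ev.CurrentStep]; ok {
			return fmt.Errorf("step %s has already been assigned to %q, cannot use for %q",
				ev.CurrentStep, prev, ev.Message)
		}
		seen[ev.CurrentStep] = ev.Message
	}
	return nil
}

func main() {
	// Mirrors the failing run: step 12 appears twice because of the kubeadm retry.
	events := []stepEvent{
		{"11", "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ..."},
		{"12", "Generating certificates and keys ..."},
		{"13", "Booting up control plane ..."},
		{"12", "Generating certificates and keys ..."}, // retry after the first kubeadm failure
	}
	if err := distinctCurrentSteps(events); err != nil {
		fmt.Println("FAIL:", err)
	}
}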

x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:144: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 5d992eb1-84d3-4ce1-9842-892bd88c17fa
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-487749] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 42b862fe-8ce8-4a65-9cca-bb6548188267
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=21683"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 234d2cdc-fbf9-45d1-b26b-6cc1389a5433
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 09ec2759-0355-476f-b84d-069844751046
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 7ef1fbd8-524f-4ebf-b09c-3f0828549d64
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 3a5f8e28-2dc9-45f2-93a6-37bae921db00
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-linux-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 3c4abc49-32ce-4638-a8be-64aee4541de8
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 195c9a0d-7371-469a-a4f8-3c9677d0784e
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 2d5e973e-f2ea-41d8-90ae-50598ec3fa82
datacontenttype: application/json
Data,
{
"message": "Using Docker driver with root privileges"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 3a5c10af-4e2d-4565-9476-4c084d08f9a1
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-487749\" primary control-plane node in \"json-output-487749\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: f05674ac-3cd4-4798-8c23-212d33a4f1fb
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image v0.0.48-1759745255-21703 ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 49973140-cf14-4309-90a4-c39fd7c40250
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=3072MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 6fe80abe-299c-4c13-bc01-0dd06b09b279
datacontenttype: application/json
Data,
{
"currentstep": "11",
"message": "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...",
"name": "Preparing Kubernetes",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: afd6d4da-ce07-45bc-9e47-e0bcfeb6dbaf
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: bfc1bd46-decb-4cb8-8b11-48e4d097c3b6
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 2d639120-52d1-4ad4-919d-6cc7f03a6d1a
datacontenttype: application/json
Data,
{
"message": "initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGR
OUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[c
erts] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-487749 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-487749 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kub
elet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.187131ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000246367s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000334968s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000502329s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cr
io.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:
10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: dc041af6-3894-49bb-8131-615def201508
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 071bfe1e-4e4b-4366-869d-791127486191
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 8cf00656-a191-421b-aa0d-203781755f4f
datacontenttype: application/json
Data,
{
"message": "Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[
0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] U
sing existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating stati
c Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001886766s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[
control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.00060807s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001051434s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.001002366s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WAR
NING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 01d60ff4-b028-4f2f-bb0e-cc6bd216e075
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m
: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Usi
ng existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static
Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001886766s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[co
ntrol-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.00060807s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001051434s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.001002366s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNI
NG SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher",
"name": "GUEST_START",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: dec50c94-f10e-4c88-8f62-3f3c2f6f63c3
datacontenttype: application/json
Data,
{
"message": "╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰────────────────────────────────────────
───────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
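Not part of the captured output: the advice event above only names the log-collection command. A minimal sketch of running it for the json-output-487749 profile behind this TestJSONOutput run (profile name taken from the Audit table later in this report; same out/minikube-linux-amd64 binary used throughout) might look like:

	# Hypothetical triage, not executed by the test harness.
	# Write the full log bundle the advice box asks to attach to a GitHub issue:
	out/minikube-linux-amd64 -p json-output-487749 logs --file=logs.txt
	# Or just tail the most recent entries, as the post-mortem helpers below do:
	out/minikube-linux-amd64 -p json-output-487749 logs -n 25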

                                                
                                    
x
+
TestMinikubeProfile (504.9s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-227352 --driver=docker  --container-runtime=crio
E1009 20:03:37.175717  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:08:37.184602  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p first-227352 --driver=docker  --container-runtime=crio: exit status 80 (8m21.394754759s)

                                                
                                                
-- stdout --
	* [first-227352] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "first-227352" primary control-plane node in "first-227352" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [first-227352 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [first-227352 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.937704ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000367886s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000422138s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000684111s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.098333ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001148611s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001186548s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001177739s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.098333ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001148611s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001186548s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001177739s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
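Not part of the captured output: a minimal sketch of the crictl triage the kubeadm message above suggests, driven from the host through docker exec against the first-227352 node container (shown still Running in the docker inspect dump below). The container name, socket path, and crictl invocations are taken verbatim from this report; wrapping them in docker exec is an assumption about how one might run them.

	# Hypothetical follow-up, assuming the kic node container is still up.
	# List all Kubernetes containers inside the node, as the kubeadm advice suggests:
	docker exec first-227352 sh -c \
	  "crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Then read the logs of whichever control-plane container exited,
	# substituting its ID for CONTAINERID:
	docker exec first-227352 \
	  crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID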
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-linux-amd64 start -p first-227352 --driver=docker  --container-runtime=crio": exit status 80
panic.go:636: *** TestMinikubeProfile FAILED at 2025-10-09 20:10:01.700355005 +0000 UTC m=+5444.462021295
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMinikubeProfile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect second-229814
helpers_test.go:239: (dbg) Non-zero exit: docker inspect second-229814: exit status 1 (29.480115ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: second-229814

                                                
                                                
** /stderr **
helpers_test.go:241: failed to get docker inspect: exit status 1
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p second-229814 -n second-229814
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p second-229814 -n second-229814: exit status 85 (58.095835ms)

                                                
                                                
-- stdout --
	* Profile "second-229814" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-229814"

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 85 (may be ok)
helpers_test.go:249: "second-229814" host is not running, skipping log retrieval (state="* Profile \"second-229814\" not found. Run \"minikube profile list\" to view all profiles.")
helpers_test.go:175: Cleaning up "second-229814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-229814
panic.go:636: *** TestMinikubeProfile FAILED at 2025-10-09 20:10:01.935424876 +0000 UTC m=+5444.697091162
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMinikubeProfile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect first-227352
helpers_test.go:243: (dbg) docker inspect first-227352:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b3aaf18427349d9d6c87fba263ce00e43188961635ca478998db86029e6b7638",
	        "Created": "2025-10-09T20:01:45.720449678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246717,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T20:01:45.763857258Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/b3aaf18427349d9d6c87fba263ce00e43188961635ca478998db86029e6b7638/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b3aaf18427349d9d6c87fba263ce00e43188961635ca478998db86029e6b7638/hostname",
	        "HostsPath": "/var/lib/docker/containers/b3aaf18427349d9d6c87fba263ce00e43188961635ca478998db86029e6b7638/hosts",
	        "LogPath": "/var/lib/docker/containers/b3aaf18427349d9d6c87fba263ce00e43188961635ca478998db86029e6b7638/b3aaf18427349d9d6c87fba263ce00e43188961635ca478998db86029e6b7638-json.log",
	        "Name": "/first-227352",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "first-227352:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "first-227352",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 8388608000,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 16777216000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b3aaf18427349d9d6c87fba263ce00e43188961635ca478998db86029e6b7638",
	                "LowerDir": "/var/lib/docker/overlay2/6ddd52663682aad2898366e9565709871b5022067e1fa859c35581c0edf9af13-init/diff:/var/lib/docker/overlay2/c193b84efd0bb6d34037a32737bdcae746717818030dd5526bd386bc03236168/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ddd52663682aad2898366e9565709871b5022067e1fa859c35581c0edf9af13/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ddd52663682aad2898366e9565709871b5022067e1fa859c35581c0edf9af13/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ddd52663682aad2898366e9565709871b5022067e1fa859c35581c0edf9af13/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "first-227352",
	                "Source": "/var/lib/docker/volumes/first-227352/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "first-227352",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "first-227352",
	                "name.minikube.sigs.k8s.io": "first-227352",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "32df9f12e73635dbaf2ea54ec29f5227174e1995d442acf2ab6ec78faf393712",
	            "SandboxKey": "/var/run/docker/netns/32df9f12e736",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "first-227352": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:6e:2a:42:9b:56",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "721f900d7f24ef3f8261c56d01eb6f19d8195e1d8e1c77b0f59ef1926a29ace5",
	                    "EndpointID": "8a9e92ef9c82c66a4548738c2bd8d4ceaa5b56802b376023abe6e5e4eee217f3",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "first-227352",
	                        "b3aaf1842734"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
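For reference, the two values a post-mortem usually needs from the inspect dump above (node IP and the host port published for the API server) can be pulled directly with docker inspect --format templates; a sketch, with field paths assumed from the JSON layout shown above:

	# Node IP on the first-227352 network (expected: 192.168.58.2 per the dump above):
	docker inspect -f '{{(index .NetworkSettings.Networks "first-227352").IPAddress}}' first-227352
	# Host port published for the API server's 8443/tcp (expected: 32831):
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' first-227352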
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p first-227352 -n first-227352
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p first-227352 -n first-227352: exit status 6 (305.762689ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 20:10:02.244310  251231 status.go:458] kubeconfig endpoint: get endpoint: "first-227352" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMinikubeProfile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMinikubeProfile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p first-227352 logs -n 25
helpers_test.go:260: TestMinikubeProfile logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬──────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │         PROFILE          │   USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼──────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ ha-898615 node delete m03 --alsologtostderr -v 5                                                                        │ ha-898615                │ jenkins  │ v1.37.0 │ 09 Oct 25 19:44 UTC │                     │
	│ stop    │ ha-898615 stop --alsologtostderr -v 5                                                                                   │ ha-898615                │ jenkins  │ v1.37.0 │ 09 Oct 25 19:44 UTC │ 09 Oct 25 19:44 UTC │
	│ start   │ ha-898615 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                            │ ha-898615                │ jenkins  │ v1.37.0 │ 09 Oct 25 19:44 UTC │                     │
	│ node    │ ha-898615 node add --control-plane --alsologtostderr -v 5                                                               │ ha-898615                │ jenkins  │ v1.37.0 │ 09 Oct 25 19:50 UTC │                     │
	│ delete  │ -p ha-898615                                                                                                            │ ha-898615                │ jenkins  │ v1.37.0 │ 09 Oct 25 19:51 UTC │ 09 Oct 25 19:51 UTC │
	│ start   │ -p json-output-487749 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ json-output-487749       │ testUser │ v1.37.0 │ 09 Oct 25 19:51 UTC │                     │
	│ pause   │ -p json-output-487749 --output=json --user=testUser                                                                     │ json-output-487749       │ testUser │ v1.37.0 │ 09 Oct 25 19:59 UTC │ 09 Oct 25 19:59 UTC │
	│ unpause │ -p json-output-487749 --output=json --user=testUser                                                                     │ json-output-487749       │ testUser │ v1.37.0 │ 09 Oct 25 19:59 UTC │ 09 Oct 25 19:59 UTC │
	│ stop    │ -p json-output-487749 --output=json --user=testUser                                                                     │ json-output-487749       │ testUser │ v1.37.0 │ 09 Oct 25 19:59 UTC │ 09 Oct 25 19:59 UTC │
	│ delete  │ -p json-output-487749                                                                                                   │ json-output-487749       │ jenkins  │ v1.37.0 │ 09 Oct 25 19:59 UTC │ 09 Oct 25 19:59 UTC │
	│ start   │ -p json-output-error-415895 --memory=3072 --output=json --wait=true --driver=fail                                       │ json-output-error-415895 │ jenkins  │ v1.37.0 │ 09 Oct 25 19:59 UTC │                     │
	│ delete  │ -p json-output-error-415895                                                                                             │ json-output-error-415895 │ jenkins  │ v1.37.0 │ 09 Oct 25 19:59 UTC │ 09 Oct 25 19:59 UTC │
	│ start   │ -p docker-network-419382 --network=                                                                                     │ docker-network-419382    │ jenkins  │ v1.37.0 │ 09 Oct 25 19:59 UTC │ 09 Oct 25 19:59 UTC │
	│ delete  │ -p docker-network-419382                                                                                                │ docker-network-419382    │ jenkins  │ v1.37.0 │ 09 Oct 25 19:59 UTC │ 09 Oct 25 20:00 UTC │
	│ start   │ -p docker-network-755683 --network=bridge                                                                               │ docker-network-755683    │ jenkins  │ v1.37.0 │ 09 Oct 25 20:00 UTC │ 09 Oct 25 20:00 UTC │
	│ delete  │ -p docker-network-755683                                                                                                │ docker-network-755683    │ jenkins  │ v1.37.0 │ 09 Oct 25 20:00 UTC │ 09 Oct 25 20:00 UTC │
	│ start   │ -p existing-network-345563 --network=existing-network                                                                   │ existing-network-345563  │ jenkins  │ v1.37.0 │ 09 Oct 25 20:00 UTC │ 09 Oct 25 20:00 UTC │
	│ delete  │ -p existing-network-345563                                                                                              │ existing-network-345563  │ jenkins  │ v1.37.0 │ 09 Oct 25 20:00 UTC │ 09 Oct 25 20:00 UTC │
	│ start   │ -p custom-subnet-616418 --subnet=192.168.60.0/24                                                                        │ custom-subnet-616418     │ jenkins  │ v1.37.0 │ 09 Oct 25 20:00 UTC │ 09 Oct 25 20:01 UTC │
	│ delete  │ -p custom-subnet-616418                                                                                                 │ custom-subnet-616418     │ jenkins  │ v1.37.0 │ 09 Oct 25 20:01 UTC │ 09 Oct 25 20:01 UTC │
	│ start   │ -p static-ip-207010 --static-ip=192.168.200.200                                                                         │ static-ip-207010         │ jenkins  │ v1.37.0 │ 09 Oct 25 20:01 UTC │ 09 Oct 25 20:01 UTC │
	│ ip      │ static-ip-207010 ip                                                                                                     │ static-ip-207010         │ jenkins  │ v1.37.0 │ 09 Oct 25 20:01 UTC │ 09 Oct 25 20:01 UTC │
	│ delete  │ -p static-ip-207010                                                                                                     │ static-ip-207010         │ jenkins  │ v1.37.0 │ 09 Oct 25 20:01 UTC │ 09 Oct 25 20:01 UTC │
	│ start   │ -p first-227352 --driver=docker  --container-runtime=crio                                                               │ first-227352             │ jenkins  │ v1.37.0 │ 09 Oct 25 20:01 UTC │                     │
	│ delete  │ -p second-229814                                                                                                        │ second-229814            │ jenkins  │ v1.37.0 │ 09 Oct 25 20:10 UTC │ 09 Oct 25 20:10 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴──────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 20:01:40
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:01:40.347740  246149 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:01:40.348013  246149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:01:40.348017  246149 out.go:374] Setting ErrFile to fd 2...
	I1009 20:01:40.348021  246149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:01:40.348203  246149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 20:01:40.348752  246149 out.go:368] Setting JSON to false
	I1009 20:01:40.349661  246149 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6249,"bootTime":1760033851,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 20:01:40.349773  246149 start.go:143] virtualization: kvm guest
	I1009 20:01:40.351842  246149 out.go:179] * [first-227352] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 20:01:40.353515  246149 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:01:40.353615  246149 notify.go:221] Checking for updates...
	I1009 20:01:40.356359  246149 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:01:40.358011  246149 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 20:01:40.359413  246149 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 20:01:40.360643  246149 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 20:01:40.361933  246149 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:01:40.363524  246149 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:01:40.388553  246149 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 20:01:40.388689  246149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:01:40.451194  246149 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 20:01:40.441244055 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 20:01:40.451287  246149 docker.go:319] overlay module found
	I1009 20:01:40.453306  246149 out.go:179] * Using the docker driver based on user configuration
	I1009 20:01:40.454536  246149 start.go:309] selected driver: docker
	I1009 20:01:40.454546  246149 start.go:930] validating driver "docker" against <nil>
	I1009 20:01:40.454558  246149 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:01:40.454664  246149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:01:40.515784  246149 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 20:01:40.505938878 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 20:01:40.515946  246149 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 20:01:40.516363  246149 start_flags.go:411] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1009 20:01:40.516531  246149 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 20:01:40.518427  246149 out.go:179] * Using Docker driver with root privileges
	I1009 20:01:40.520008  246149 cni.go:84] Creating CNI manager for ""
	I1009 20:01:40.520060  246149 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:01:40.520067  246149 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 20:01:40.520149  246149 start.go:353] cluster config:
	{Name:first-227352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-227352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:01:40.521713  246149 out.go:179] * Starting "first-227352" primary control-plane node in "first-227352" cluster
	I1009 20:01:40.523134  246149 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 20:01:40.524426  246149 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 20:01:40.525751  246149 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:01:40.525799  246149 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 20:01:40.525809  246149 cache.go:58] Caching tarball of preloaded images
	I1009 20:01:40.525870  246149 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 20:01:40.525909  246149 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 20:01:40.525916  246149 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 20:01:40.526229  246149 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/config.json ...
	I1009 20:01:40.526247  246149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/config.json: {Name:mk31fcadc4233b891c405cf2ea1b2522ce4b862a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:01:40.547364  246149 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 20:01:40.547406  246149 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 20:01:40.547427  246149 cache.go:232] Successfully downloaded all kic artifacts
	I1009 20:01:40.547460  246149 start.go:361] acquireMachinesLock for first-227352: {Name:mkb0a764839f738f75ecb13e224aa86b62f60c80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:01:40.547575  246149 start.go:365] duration metric: took 98.387µs to acquireMachinesLock for "first-227352"
	I1009 20:01:40.547602  246149 start.go:94] Provisioning new machine with config: &{Name:first-227352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-227352 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
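The acquireMachinesLock step above (Delay:500ms, Timeout:10m0s) serializes machine creation with a file-backed lock. As a rough illustration of that pattern only, not minikube's actual lock.go, here is a minimal Go sketch that polls for an exclusive lock file with the same delay and timeout; the /tmp path is hypothetical.

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquireFileLock polls for an exclusive lock file until timeout.
// O_CREATE|O_EXCL fails if the file already exists, which gives a simple
// cross-process mutex; the delay/timeout mirror the Delay:500ms /
// Timeout:10m0s values seen in the log above.
func acquireFileLock(path string, delay, timeout time.Duration) (func() error, error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() error { return os.Remove(path) }, nil
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for lock %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireFileLock("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; provisioning would happen here")
}
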
	I1009 20:01:40.547671  246149 start.go:126] createHost starting for "" (driver="docker")
	I1009 20:01:40.549955  246149 out.go:252] * Creating docker container (CPUs=2, Memory=8000MB) ...
	I1009 20:01:40.550175  246149 start.go:160] libmachine.API.Create for "first-227352" (driver="docker")
	I1009 20:01:40.550198  246149 client.go:168] LocalClient.Create starting
	I1009 20:01:40.550281  246149 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem
	I1009 20:01:40.550309  246149 main.go:141] libmachine: Decoding PEM data...
	I1009 20:01:40.550321  246149 main.go:141] libmachine: Parsing certificate...
	I1009 20:01:40.550391  246149 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem
	I1009 20:01:40.550414  246149 main.go:141] libmachine: Decoding PEM data...
	I1009 20:01:40.550424  246149 main.go:141] libmachine: Parsing certificate...
	I1009 20:01:40.550769  246149 cli_runner.go:164] Run: docker network inspect first-227352 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 20:01:40.568454  246149 cli_runner.go:211] docker network inspect first-227352 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 20:01:40.568515  246149 network_create.go:284] running [docker network inspect first-227352] to gather additional debugging logs...
	I1009 20:01:40.568534  246149 cli_runner.go:164] Run: docker network inspect first-227352
	W1009 20:01:40.586858  246149 cli_runner.go:211] docker network inspect first-227352 returned with exit code 1
	I1009 20:01:40.586880  246149 network_create.go:287] error running [docker network inspect first-227352]: docker network inspect first-227352: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network first-227352 not found
	I1009 20:01:40.586895  246149 network_create.go:289] output of [docker network inspect first-227352]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network first-227352 not found
	
	** /stderr **
	I1009 20:01:40.586986  246149 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:01:40.604856  246149 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cd1531978c8a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:07:a4:4e:bd:f9} reservation:<nil>}
	I1009 20:01:40.605222  246149 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a8ee10}
	I1009 20:01:40.605244  246149 network_create.go:124] attempt to create docker network first-227352 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1009 20:01:40.605287  246149 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=first-227352 first-227352
	I1009 20:01:40.664494  246149 network_create.go:108] docker network first-227352 192.168.58.0/24 created
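The network_create step above first finds a free private subnet (192.168.49.0/24 was taken, so 192.168.58.0/24 is used) and then shells out to docker network create. A minimal sketch of that inspect-then-create flow in Go follows; the command-line options are copied from the log, the error handling is simplified, and this is not minikube's actual network_create.go.

package main

import (
	"fmt"
	"os/exec"
)

// ensureNetwork creates a bridge network with a fixed subnet/gateway if it
// does not already exist, mirroring the `docker network create` invocation
// logged above for first-227352.
func ensureNetwork(name, subnet, gateway string) error {
	// `docker network inspect` exits non-zero when the network is missing.
	if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
		return nil // already present
	}
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io="+name,
		name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("network create failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := ensureNetwork("first-227352", "192.168.58.0/24", "192.168.58.1"); err != nil {
		panic(err)
	}
	fmt.Println("network ready")
}
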
	I1009 20:01:40.664523  246149 kic.go:121] calculated static IP "192.168.58.2" for the "first-227352" container
	I1009 20:01:40.664586  246149 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 20:01:40.683107  246149 cli_runner.go:164] Run: docker volume create first-227352 --label name.minikube.sigs.k8s.io=first-227352 --label created_by.minikube.sigs.k8s.io=true
	I1009 20:01:40.702271  246149 oci.go:103] Successfully created a docker volume first-227352
	I1009 20:01:40.702359  246149 cli_runner.go:164] Run: docker run --rm --name first-227352-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=first-227352 --entrypoint /usr/bin/test -v first-227352:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 20:01:41.090721  246149 oci.go:107] Successfully prepared a docker volume first-227352
	I1009 20:01:41.090755  246149 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:01:41.090776  246149 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 20:01:41.090831  246149 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v first-227352:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 20:01:45.642643  246149 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v first-227352:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.551761985s)
	I1009 20:01:45.642671  246149 kic.go:203] duration metric: took 4.551891158s to extract preloaded images to volume ...
	W1009 20:01:45.642773  246149 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 20:01:45.642796  246149 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 20:01:45.642832  246149 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 20:01:45.704240  246149 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname first-227352 --name first-227352 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=first-227352 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=first-227352 --network first-227352 --ip 192.168.58.2 --volume first-227352:/var --security-opt apparmor=unconfined --memory=8000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 20:01:45.985226  246149 cli_runner.go:164] Run: docker container inspect first-227352 --format={{.State.Running}}
	I1009 20:01:46.003895  246149 cli_runner.go:164] Run: docker container inspect first-227352 --format={{.State.Status}}
	I1009 20:01:46.023726  246149 cli_runner.go:164] Run: docker exec first-227352 stat /var/lib/dpkg/alternatives/iptables
	I1009 20:01:46.069748  246149 oci.go:144] the created container "first-227352" has a running status.
	I1009 20:01:46.069775  246149 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/first-227352/id_rsa...
	I1009 20:01:46.358772  246149 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-137890/.minikube/machines/first-227352/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 20:01:46.384511  246149 cli_runner.go:164] Run: docker container inspect first-227352 --format={{.State.Status}}
	I1009 20:01:46.402500  246149 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 20:01:46.402515  246149 kic_runner.go:114] Args: [docker exec --privileged first-227352 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 20:01:46.449841  246149 cli_runner.go:164] Run: docker container inspect first-227352 --format={{.State.Status}}
	I1009 20:01:46.467850  246149 machine.go:93] provisionDockerMachine start ...
	I1009 20:01:46.467935  246149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-227352
	I1009 20:01:46.485956  246149 main.go:141] libmachine: Using SSH client type: native
	I1009 20:01:46.486212  246149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1009 20:01:46.486219  246149 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:01:46.486959  246149 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35028->127.0.0.1:32828: read: connection reset by peer
	I1009 20:01:49.636788  246149 main.go:141] libmachine: SSH cmd err, output: <nil>: first-227352
	
	I1009 20:01:49.636805  246149 ubuntu.go:182] provisioning hostname "first-227352"
	I1009 20:01:49.636859  246149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-227352
	I1009 20:01:49.656111  246149 main.go:141] libmachine: Using SSH client type: native
	I1009 20:01:49.656338  246149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1009 20:01:49.656345  246149 main.go:141] libmachine: About to run SSH command:
	sudo hostname first-227352 && echo "first-227352" | sudo tee /etc/hostname
	I1009 20:01:49.813788  246149 main.go:141] libmachine: SSH cmd err, output: <nil>: first-227352
	
	I1009 20:01:49.813859  246149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-227352
	I1009 20:01:49.833196  246149 main.go:141] libmachine: Using SSH client type: native
	I1009 20:01:49.833441  246149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1009 20:01:49.833457  246149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfirst-227352' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 first-227352/g' /etc/hosts;
				else 
					echo '127.0.1.1 first-227352' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:01:49.980528  246149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
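The first SSH session in provisionDockerMachine above initially fails with "connection reset by peer" and succeeds a few seconds later, the usual dial-with-retry while sshd inside the new container comes up. A hedged sketch of that retry loop using golang.org/x/crypto/ssh is below; the port 32828 and key path mirror the log, but the helper itself is illustrative, not minikube's sshutil.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps re-dialing the forwarded SSH port until sshd inside the
// freshly started container accepts the handshake, matching the
// "connection reset by peer" -> success pattern in the log above.
func dialWithRetry(addr, user, keyPath string, timeout time.Duration) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local container
		Timeout:         5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("ssh never came up on %s: %w", addr, err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	client, err := dialWithRetry("127.0.0.1:32828", "docker",
		"/home/jenkins/minikube-integration/21683-137890/.minikube/machines/first-227352/id_rsa",
		time.Minute)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("ssh is up")
}
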
	I1009 20:01:49.980557  246149 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
	I1009 20:01:49.980583  246149 ubuntu.go:190] setting up certificates
	I1009 20:01:49.980604  246149 provision.go:84] configureAuth start
	I1009 20:01:49.980727  246149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-227352
	I1009 20:01:49.999823  246149 provision.go:143] copyHostCerts
	I1009 20:01:49.999875  246149 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem, removing ...
	I1009 20:01:49.999883  246149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem
	I1009 20:01:49.999970  246149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
	I1009 20:01:50.000088  246149 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem, removing ...
	I1009 20:01:50.000092  246149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem
	I1009 20:01:50.000120  246149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
	I1009 20:01:50.000186  246149 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem, removing ...
	I1009 20:01:50.000189  246149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem
	I1009 20:01:50.000211  246149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
	I1009 20:01:50.000259  246149 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.first-227352 san=[127.0.0.1 192.168.58.2 first-227352 localhost minikube]
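The "generating server cert" line above produces a server certificate signed by the minikube CA with the SAN list [127.0.0.1 192.168.58.2 first-227352 localhost minikube]. A self-contained sketch of the same idea with crypto/x509 is shown below; it creates a throwaway CA in memory instead of reading ca.pem/ca-key.pem, so it is illustrative only, not provision.go.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (stand-in for the persisted minikube CA key pair).
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SAN list from the log above.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.first-227352"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"first-227352", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.58.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
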
	I1009 20:01:50.329765  246149 provision.go:177] copyRemoteCerts
	I1009 20:01:50.329820  246149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:01:50.329857  246149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-227352
	I1009 20:01:50.348683  246149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/first-227352/id_rsa Username:docker}
	I1009 20:01:50.453283  246149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 20:01:50.474108  246149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1009 20:01:50.493448  246149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 20:01:50.512089  246149 provision.go:87] duration metric: took 531.471468ms to configureAuth
	I1009 20:01:50.512113  246149 ubuntu.go:206] setting minikube options for container-runtime
	I1009 20:01:50.512297  246149 config.go:182] Loaded profile config "first-227352": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:01:50.512413  246149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-227352
	I1009 20:01:50.531115  246149 main.go:141] libmachine: Using SSH client type: native
	I1009 20:01:50.531327  246149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1009 20:01:50.531337  246149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:01:50.797707  246149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:01:50.797728  246149 machine.go:96] duration metric: took 4.329864396s to provisionDockerMachine
	I1009 20:01:50.797738  246149 client.go:171] duration metric: took 10.247534607s to LocalClient.Create
	I1009 20:01:50.797763  246149 start.go:168] duration metric: took 10.24759075s to libmachine.API.Create "first-227352"
	I1009 20:01:50.797771  246149 start.go:294] postStartSetup for "first-227352" (driver="docker")
	I1009 20:01:50.797781  246149 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:01:50.797847  246149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:01:50.797881  246149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-227352
	I1009 20:01:50.816093  246149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/first-227352/id_rsa Username:docker}
	I1009 20:01:50.921884  246149 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:01:50.925634  246149 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 20:01:50.925657  246149 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 20:01:50.925670  246149 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
	I1009 20:01:50.925744  246149 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
	I1009 20:01:50.925824  246149 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem -> 1415192.pem in /etc/ssl/certs
	I1009 20:01:50.925914  246149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:01:50.934286  246149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 20:01:50.956584  246149 start.go:297] duration metric: took 158.797629ms for postStartSetup
	I1009 20:01:50.957021  246149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-227352
	I1009 20:01:50.976146  246149 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/config.json ...
	I1009 20:01:50.976416  246149 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 20:01:50.976455  246149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-227352
	I1009 20:01:50.994850  246149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/first-227352/id_rsa Username:docker}
	I1009 20:01:51.095965  246149 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 20:01:51.100669  246149 start.go:129] duration metric: took 10.552982359s to createHost
	I1009 20:01:51.100712  246149 start.go:84] releasing machines lock for "first-227352", held for 10.553122018s
	I1009 20:01:51.100791  246149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-227352
	I1009 20:01:51.119034  246149 ssh_runner.go:195] Run: cat /version.json
	I1009 20:01:51.119073  246149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-227352
	I1009 20:01:51.119074  246149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:01:51.119135  246149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-227352
	I1009 20:01:51.139439  246149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/first-227352/id_rsa Username:docker}
	I1009 20:01:51.139653  246149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/first-227352/id_rsa Username:docker}
	I1009 20:01:51.293307  246149 ssh_runner.go:195] Run: systemctl --version
	I1009 20:01:51.300194  246149 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:01:51.338028  246149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:01:51.343017  246149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:01:51.343091  246149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:01:51.371287  246149 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:01:51.371304  246149 start.go:496] detecting cgroup driver to use...
	I1009 20:01:51.371340  246149 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 20:01:51.371407  246149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:01:51.388184  246149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:01:51.400834  246149 docker.go:218] disabling cri-docker service (if available) ...
	I1009 20:01:51.400884  246149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:01:51.417939  246149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:01:51.436254  246149 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:01:51.519944  246149 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:01:51.608680  246149 docker.go:234] disabling docker service ...
	I1009 20:01:51.608742  246149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:01:51.628813  246149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:01:51.642341  246149 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:01:51.728725  246149 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:01:51.811978  246149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:01:51.825772  246149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:01:51.841183  246149 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 20:01:51.841241  246149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:01:51.852124  246149 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 20:01:51.852187  246149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:01:51.862024  246149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:01:51.871223  246149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:01:51.880537  246149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:01:51.889292  246149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:01:51.898773  246149 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:01:51.913441  246149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:01:51.923221  246149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:01:51.931279  246149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:01:51.939097  246149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:01:52.019134  246149 ssh_runner.go:195] Run: sudo systemctl restart crio
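The run of sed -i commands above rewrites /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the expected pause image and the systemd cgroup manager before the restart. The same edit-in-place idea, expressed as a small local Go helper rather than sed over SSH (path and permissions assumed), looks roughly like this:

package main

import (
	"os"
	"regexp"
)

// rewriteCrioConf applies the same kind of whole-line substitutions the log's
// sed commands perform: force the pause image and the systemd cgroup manager.
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "systemd"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		panic(err)
	}
}
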
	I1009 20:01:52.129273  246149 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:01:52.129330  246149 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:01:52.133850  246149 start.go:564] Will wait 60s for crictl version
	I1009 20:01:52.133918  246149 ssh_runner.go:195] Run: which crictl
	I1009 20:01:52.137782  246149 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 20:01:52.165251  246149 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 20:01:52.165329  246149 ssh_runner.go:195] Run: crio --version
	I1009 20:01:52.194564  246149 ssh_runner.go:195] Run: crio --version
	I1009 20:01:52.227504  246149 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 20:01:52.229053  246149 cli_runner.go:164] Run: docker network inspect first-227352 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:01:52.248296  246149 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1009 20:01:52.252750  246149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:01:52.263708  246149 kubeadm.go:883] updating cluster {Name:first-227352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-227352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s} ...
	I1009 20:01:52.263816  246149 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:01:52.263855  246149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:01:52.297719  246149 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:01:52.297736  246149 crio.go:433] Images already preloaded, skipping extraction
	I1009 20:01:52.297793  246149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:01:52.326648  246149 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:01:52.326685  246149 cache_images.go:85] Images are preloaded, skipping loading
	I1009 20:01:52.326692  246149 kubeadm.go:934] updating node { 192.168.58.2 8443 v1.34.1 crio true true} ...
	I1009 20:01:52.326782  246149 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=first-227352 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:first-227352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:01:52.326842  246149 ssh_runner.go:195] Run: crio config
	I1009 20:01:52.374749  246149 cni.go:84] Creating CNI manager for ""
	I1009 20:01:52.374760  246149 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:01:52.374776  246149 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 20:01:52.374798  246149 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:first-227352 NodeName:first-227352 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:01:52.374913  246149 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "first-227352"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.58.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
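Before the actual kubeadm init further down in this log, a config like the one printed above can be sanity-checked with a dry run. A small sketch follows; it assumes kubeadm is on PATH, root privileges, and the config already staged at the kubeadm.yaml.new path minikube uses.

package main

import (
	"os"
	"os/exec"
)

// Validate the generated kubeadm config with --dry-run before committing to
// a real init. Illustrative only; minikube itself goes straight to init with
// --ignore-preflight-errors, as shown later in the log.
func main() {
	cmd := exec.Command("kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml.new",
		"--dry-run")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
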
	
	I1009 20:01:52.374971  246149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 20:01:52.383847  246149 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:01:52.383904  246149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:01:52.392548  246149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1009 20:01:52.406567  246149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:01:52.423503  246149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1009 20:01:52.437513  246149 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:01:52.441537  246149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:01:52.452694  246149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:01:52.533494  246149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:01:52.558357  246149 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352 for IP: 192.168.58.2
	I1009 20:01:52.558372  246149 certs.go:195] generating shared ca certs ...
	I1009 20:01:52.558409  246149 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:01:52.558584  246149 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
	I1009 20:01:52.558615  246149 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
	I1009 20:01:52.558621  246149 certs.go:257] generating profile certs ...
	I1009 20:01:52.558675  246149 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/client.key
	I1009 20:01:52.558697  246149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/client.crt with IP's: []
	I1009 20:01:52.812699  246149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/client.crt ...
	I1009 20:01:52.812720  246149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/client.crt: {Name:mkccdd0ad2be3ff8f3fd3e043007d73c8893e2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:01:52.812928  246149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/client.key ...
	I1009 20:01:52.812944  246149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/client.key: {Name:mkb04345f5a3a4bceb4d5014fd5241b3f19e1ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:01:52.813024  246149 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/apiserver.key.6fceb3fe
	I1009 20:01:52.813035  246149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/apiserver.crt.6fceb3fe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.58.2]
	I1009 20:01:53.138658  246149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/apiserver.crt.6fceb3fe ...
	I1009 20:01:53.138682  246149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/apiserver.crt.6fceb3fe: {Name:mkc252b8d00780bbe801c20de273ac339c617a48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:01:53.138876  246149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/apiserver.key.6fceb3fe ...
	I1009 20:01:53.138886  246149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/apiserver.key.6fceb3fe: {Name:mk21c9f91a407846fcfd778452124a9f1ef79cfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:01:53.138966  246149 certs.go:382] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/apiserver.crt.6fceb3fe -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/apiserver.crt
	I1009 20:01:53.139072  246149 certs.go:386] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/apiserver.key.6fceb3fe -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/apiserver.key
	I1009 20:01:53.139133  246149 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/proxy-client.key
	I1009 20:01:53.139145  246149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/proxy-client.crt with IP's: []
	I1009 20:01:53.509251  246149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/proxy-client.crt ...
	I1009 20:01:53.509274  246149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/proxy-client.crt: {Name:mkff3fe342e86e64c8648b7a8c8e4c9bf352c096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:01:53.509493  246149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/proxy-client.key ...
	I1009 20:01:53.509502  246149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/proxy-client.key: {Name:mk8c49ff0c7358d1091e615912b62af88774b3da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:01:53.509694  246149 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem (1338 bytes)
	W1009 20:01:53.509725  246149 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519_empty.pem, impossibly tiny 0 bytes
	I1009 20:01:53.509733  246149 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 20:01:53.509754  246149 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:01:53.509774  246149 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:01:53.509792  246149 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
	I1009 20:01:53.509824  246149 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem (1708 bytes)
	I1009 20:01:53.510365  246149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:01:53.529580  246149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:01:53.548130  246149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:01:53.566116  246149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 20:01:53.585400  246149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 20:01:53.603593  246149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:01:53.622535  246149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:01:53.641448  246149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/first-227352/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:01:53.660415  246149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/ssl/certs/1415192.pem --> /usr/share/ca-certificates/1415192.pem (1708 bytes)
	I1009 20:01:53.681797  246149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:01:53.699774  246149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/141519.pem --> /usr/share/ca-certificates/141519.pem (1338 bytes)
	I1009 20:01:53.717837  246149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:01:53.731543  246149 ssh_runner.go:195] Run: openssl version
	I1009 20:01:53.738143  246149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1415192.pem && ln -fs /usr/share/ca-certificates/1415192.pem /etc/ssl/certs/1415192.pem"
	I1009 20:01:53.747312  246149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1415192.pem
	I1009 20:01:53.751462  246149 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:56 /usr/share/ca-certificates/1415192.pem
	I1009 20:01:53.751528  246149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1415192.pem
	I1009 20:01:53.786173  246149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1415192.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:01:53.795661  246149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:01:53.805045  246149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:01:53.809122  246149 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:01:53.809169  246149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:01:53.843887  246149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:01:53.853200  246149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141519.pem && ln -fs /usr/share/ca-certificates/141519.pem /etc/ssl/certs/141519.pem"
	I1009 20:01:53.861759  246149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141519.pem
	I1009 20:01:53.866222  246149 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:56 /usr/share/ca-certificates/141519.pem
	I1009 20:01:53.866280  246149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141519.pem
	I1009 20:01:53.900583  246149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141519.pem /etc/ssl/certs/51391683.0"
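The openssl x509 -hash / ln -fs pairs above follow OpenSSL's CA lookup convention: each PEM copied under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink. A minimal Go sketch of that hash-and-symlink step (paths taken from the log; writing /etc/ssl/certs requires root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trustCert reproduces the hash-and-symlink convention used above: OpenSSL
// looks up CAs in /etc/ssl/certs by "<subject-hash>.0", so each copied PEM
// gets a symlink named after the output of `openssl x509 -hash`.
func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
	fmt.Println("CA trusted")
}
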
	I1009 20:01:53.909811  246149 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:01:53.913520  246149 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 20:01:53.913564  246149 kubeadm.go:400] StartCluster: {Name:first-227352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-227352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Aut
oPauseInterval:1m0s}
	I1009 20:01:53.913625  246149 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:01:53.913664  246149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:01:53.942509  246149 cri.go:89] found id: ""
	I1009 20:01:53.942583  246149 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:01:53.951123  246149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:01:53.959318  246149 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 20:01:53.959397  246149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:01:53.968006  246149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:01:53.968017  246149 kubeadm.go:157] found existing configuration files:
	
	I1009 20:01:53.968061  246149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:01:53.976088  246149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:01:53.976134  246149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:01:53.984152  246149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:01:53.992187  246149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:01:53.992231  246149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:01:53.999826  246149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:01:54.007539  246149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:01:54.007590  246149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:01:54.014933  246149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:01:54.022656  246149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:01:54.022705  246149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
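	The grep/rm sequence above is minikube's stale kubeconfig cleanup: each file under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails (here the files simply do not exist yet, so every grep exits with status 2). A minimal sketch of the same check done by hand, assuming a shell on the node (for example via 'minikube ssh -p first-227352'):

	    # Sketch only: replicate minikube's stale kubeconfig check manually.
	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	        # Missing file or unexpected endpoint: treat the config as stale and remove it
	        # so that the following 'kubeadm init' regenerates it.
	        sudo rm -f "/etc/kubernetes/$f"
	      fi
	    done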
	I1009 20:01:54.030172  246149 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 20:01:54.067448  246149 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 20:01:54.067515  246149 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 20:01:54.104879  246149 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 20:01:54.104945  246149 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 20:01:54.104985  246149 kubeadm.go:318] OS: Linux
	I1009 20:01:54.105062  246149 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 20:01:54.105118  246149 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 20:01:54.105157  246149 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 20:01:54.105223  246149 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 20:01:54.105286  246149 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 20:01:54.105414  246149 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 20:01:54.105486  246149 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 20:01:54.105539  246149 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 20:01:54.170593  246149 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:01:54.170727  246149 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:01:54.170879  246149 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:01:54.179188  246149 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:01:54.183003  246149 out.go:252]   - Generating certificates and keys ...
	I1009 20:01:54.183093  246149 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 20:01:54.183179  246149 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 20:01:54.340352  246149 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 20:01:54.432486  246149 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 20:01:54.595336  246149 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 20:01:54.661664  246149 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 20:01:55.118442  246149 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 20:01:55.118593  246149 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [first-227352 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1009 20:01:55.810627  246149 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 20:01:55.810806  246149 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [first-227352 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1009 20:01:56.205179  246149 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 20:01:56.514303  246149 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 20:01:57.106776  246149 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 20:01:57.106877  246149 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:01:57.333875  246149 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:01:57.436512  246149 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:01:57.818172  246149 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:01:57.959191  246149 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:01:58.186600  246149 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:01:58.187081  246149 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:01:58.191058  246149 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:01:58.192426  246149 out.go:252]   - Booting up control plane ...
	I1009 20:01:58.192536  246149 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:01:58.192637  246149 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:01:58.193240  246149 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:01:58.206745  246149 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:01:58.206866  246149 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 20:01:58.213681  246149 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 20:01:58.213941  246149 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:01:58.213988  246149 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 20:01:58.313992  246149 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:01:58.314143  246149 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:01:58.814826  246149 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.937704ms
	I1009 20:01:58.818635  246149 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 20:01:58.818792  246149 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	I1009 20:01:58.818891  246149 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 20:01:58.818964  246149 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 20:05:58.819695  246149 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000367886s
	I1009 20:05:58.819861  246149 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000422138s
	I1009 20:05:58.820211  246149 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000684111s
	I1009 20:05:58.820242  246149 kubeadm.go:318] 
	I1009 20:05:58.820523  246149 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 20:05:58.820728  246149 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:05:58.820942  246149 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 20:05:58.821183  246149 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:05:58.821287  246149 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 20:05:58.821406  246149 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:05:58.821412  246149 kubeadm.go:318] 
	I1009 20:05:58.824746  246149 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 20:05:58.824934  246149 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:05:58.825753  246149 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 20:05:58.825841  246149 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1009 20:05:58.826053  246149 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [first-227352 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [first-227352 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.937704ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000367886s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000422138s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000684111s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
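	Before the retry below, the troubleshooting hint embedded in the kubeadm output above is the practical next step: list the kube-* containers known to CRI-O and read the logs of whichever one is failing. A sketch of that inspection, using the exact commands kubeadm prints and assuming shell access to the node (for example via 'minikube ssh -p first-227352'):

	    # Sketch only: follow the crictl hint from the kubeadm output above.
	    # List all Kubernetes control-plane containers, including exited ones.
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    # Inspect the logs of the failing container (CONTAINERID taken from the listing above).
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID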
	
	I1009 20:05:58.826154  246149 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:05:59.266109  246149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:05:59.279145  246149 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 20:05:59.279192  246149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:05:59.287879  246149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:05:59.287891  246149 kubeadm.go:157] found existing configuration files:
	
	I1009 20:05:59.287946  246149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:05:59.296413  246149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:05:59.296466  246149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:05:59.304347  246149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:05:59.312256  246149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:05:59.312306  246149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:05:59.320167  246149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:05:59.328326  246149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:05:59.328375  246149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:05:59.336185  246149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:05:59.344113  246149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:05:59.344172  246149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:05:59.352108  246149 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 20:05:59.389904  246149 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 20:05:59.389948  246149 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 20:05:59.411443  246149 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 20:05:59.411549  246149 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 20:05:59.411583  246149 kubeadm.go:318] OS: Linux
	I1009 20:05:59.411619  246149 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 20:05:59.411657  246149 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 20:05:59.411706  246149 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 20:05:59.411804  246149 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 20:05:59.411874  246149 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 20:05:59.411945  246149 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 20:05:59.412002  246149 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 20:05:59.412046  246149 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 20:05:59.472772  246149 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:05:59.472967  246149 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:05:59.473108  246149 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:05:59.479848  246149 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:05:59.483488  246149 out.go:252]   - Generating certificates and keys ...
	I1009 20:05:59.483587  246149 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 20:05:59.483706  246149 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 20:05:59.483769  246149 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:05:59.483815  246149 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:05:59.483891  246149 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:05:59.483949  246149 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 20:05:59.484019  246149 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:05:59.484079  246149 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:05:59.484153  246149 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:05:59.484215  246149 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:05:59.484252  246149 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 20:05:59.484295  246149 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:05:59.502467  246149 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:05:59.603696  246149 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:05:59.982257  246149 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:06:00.173884  246149 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:06:00.604771  246149 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:06:00.605235  246149 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:06:00.607624  246149 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:06:00.609716  246149 out.go:252]   - Booting up control plane ...
	I1009 20:06:00.609827  246149 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:06:00.609927  246149 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:06:00.610557  246149 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:06:00.625009  246149 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:06:00.625148  246149 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 20:06:00.633302  246149 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 20:06:00.633465  246149 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:06:00.633524  246149 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 20:06:00.738362  246149 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:06:00.738480  246149 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:06:01.239363  246149 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.098333ms
	I1009 20:06:01.242353  246149 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 20:06:01.242527  246149 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	I1009 20:06:01.242621  246149 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 20:06:01.242718  246149 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 20:10:01.244171  246149 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001148611s
	I1009 20:10:01.244525  246149 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001186548s
	I1009 20:10:01.244701  246149 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001177739s
	I1009 20:10:01.244713  246149 kubeadm.go:318] 
	I1009 20:10:01.244875  246149 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 20:10:01.245111  246149 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:10:01.245353  246149 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 20:10:01.245586  246149 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:10:01.245727  246149 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 20:10:01.245839  246149 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:10:01.245843  246149 kubeadm.go:318] 
	I1009 20:10:01.248717  246149 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 20:10:01.248832  246149 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:10:01.249336  246149 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 20:10:01.249479  246149 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:10:01.249585  246149 kubeadm.go:402] duration metric: took 8m7.336024195s to StartCluster
	I1009 20:10:01.249638  246149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:10:01.249705  246149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:10:01.279052  246149 cri.go:89] found id: ""
	I1009 20:10:01.279080  246149 logs.go:282] 0 containers: []
	W1009 20:10:01.279090  246149 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:10:01.279098  246149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:10:01.279168  246149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:10:01.306897  246149 cri.go:89] found id: ""
	I1009 20:10:01.306914  246149 logs.go:282] 0 containers: []
	W1009 20:10:01.306920  246149 logs.go:284] No container was found matching "etcd"
	I1009 20:10:01.306925  246149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:10:01.306971  246149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:10:01.334132  246149 cri.go:89] found id: ""
	I1009 20:10:01.334157  246149 logs.go:282] 0 containers: []
	W1009 20:10:01.334164  246149 logs.go:284] No container was found matching "coredns"
	I1009 20:10:01.334169  246149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:10:01.334217  246149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:10:01.363046  246149 cri.go:89] found id: ""
	I1009 20:10:01.363062  246149 logs.go:282] 0 containers: []
	W1009 20:10:01.363069  246149 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:10:01.363076  246149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:10:01.363132  246149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:10:01.390651  246149 cri.go:89] found id: ""
	I1009 20:10:01.390671  246149 logs.go:282] 0 containers: []
	W1009 20:10:01.390681  246149 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:10:01.390688  246149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:10:01.390749  246149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:10:01.420172  246149 cri.go:89] found id: ""
	I1009 20:10:01.420189  246149 logs.go:282] 0 containers: []
	W1009 20:10:01.420195  246149 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:10:01.420201  246149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:10:01.420249  246149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:10:01.448077  246149 cri.go:89] found id: ""
	I1009 20:10:01.448093  246149 logs.go:282] 0 containers: []
	W1009 20:10:01.448100  246149 logs.go:284] No container was found matching "kindnet"
	I1009 20:10:01.448110  246149 logs.go:123] Gathering logs for container status ...
	I1009 20:10:01.448121  246149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:10:01.478224  246149 logs.go:123] Gathering logs for kubelet ...
	I1009 20:10:01.478243  246149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:10:01.542174  246149 logs.go:123] Gathering logs for dmesg ...
	I1009 20:10:01.542198  246149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:10:01.555205  246149 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:10:01.555223  246149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:10:01.619240  246149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 20:10:01.611696    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:10:01.612269    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:10:01.613905    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:10:01.614395    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:10:01.615910    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 20:10:01.611696    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:10:01.612269    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:10:01.613905    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:10:01.614395    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:10:01.615910    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:10:01.619252  246149 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:10:01.619264  246149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1009 20:10:01.680210  246149 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.098333ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001148611s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001186548s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001177739s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 20:10:01.680273  246149 out.go:285] * 
	W1009 20:10:01.680354  246149 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.098333ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001148611s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001186548s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001177739s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:10:01.680374  246149 out.go:285] * 
	W1009 20:10:01.682164  246149 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:10:01.686414  246149 out.go:203] 
	W1009 20:10:01.688278  246149 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.098333ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001148611s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001186548s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001177739s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:10:01.688300  246149 out.go:285] * 
	I1009 20:10:01.690980  246149 out.go:203] 
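	In the CRI-O journal excerpt gathered below, every attempt to create a control-plane container fails with 'Container creation error: cannot open sd-bus: No such file or directory', which matches the connection-refused health checks above: kube-apiserver, kube-scheduler and kube-controller-manager never start. One plausible but unconfirmed explanation is a runtime configured for the systemd cgroup manager on a node where no systemd D-Bus is reachable. A sketch of how one might check and, if so, switch CRI-O to cgroupfs; the drop-in path and settings are illustrative assumptions, not taken from this report:

	    # Sketch only: check which cgroup manager CRI-O is configured to use.
	    sudo crio config 2>/dev/null | grep -E 'cgroup_manager|conmon_cgroup'
	    # If it reports cgroup_manager = "systemd" but no systemd bus is available,
	    # a drop-in switching to cgroupfs is one possible workaround (path is hypothetical):
	    sudo mkdir -p /etc/crio/crio.conf.d
	    printf '[crio.runtime]\ncgroup_manager = "cgroupfs"\nconmon_cgroup = "pod"\n' | \
	      sudo tee /etc/crio/crio.conf.d/10-cgroup-manager.conf >/dev/null
	    sudo systemctl restart crio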
	
	
	==> CRI-O <==
	Oct 09 20:09:50 first-227352 crio[775]: time="2025-10-09T20:09:50.020925191Z" level=info msg="createCtr: deleting container fa57ed586b9b0879b37f4d8cc12d255fe47b7250937636d02a4f5103ddf23212 from storage" id=1ad54d4a-3427-4d17-95bb-a954deca2c67 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:09:50 first-227352 crio[775]: time="2025-10-09T20:09:50.023327656Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-first-227352_kube-system_3365830a1a89fc93ab682b48c2ddecb0_0" id=8c282ad8-d6e4-4024-a9a8-040d1a7a5c7e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:09:50 first-227352 crio[775]: time="2025-10-09T20:09:50.0236522Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-first-227352_kube-system_1755f9b9d27c2d1e0bb759926d325ef9_0" id=1ad54d4a-3427-4d17-95bb-a954deca2c67 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:09:54 first-227352 crio[775]: time="2025-10-09T20:09:54.994457574Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=c761df6f-41bb-489d-be8c-a92a64eef69c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:09:54 first-227352 crio[775]: time="2025-10-09T20:09:54.996419945Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=f46fc93c-9587-41d8-86d4-78e8c59103f5 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:09:54 first-227352 crio[775]: time="2025-10-09T20:09:54.997394985Z" level=info msg="Creating container: kube-system/kube-scheduler-first-227352/kube-scheduler" id=6586cd0c-fd02-49d1-bf37-781be1324925 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:09:54 first-227352 crio[775]: time="2025-10-09T20:09:54.997680299Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:09:55 first-227352 crio[775]: time="2025-10-09T20:09:55.000891294Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:09:55 first-227352 crio[775]: time="2025-10-09T20:09:55.001339966Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:09:55 first-227352 crio[775]: time="2025-10-09T20:09:55.015924297Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=6586cd0c-fd02-49d1-bf37-781be1324925 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:09:55 first-227352 crio[775]: time="2025-10-09T20:09:55.017446532Z" level=info msg="createCtr: deleting container ID 2a94c95a2c9688b030c60fe9656091ada0ec812f9739ebba4bef18a196f38a86 from idIndex" id=6586cd0c-fd02-49d1-bf37-781be1324925 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:09:55 first-227352 crio[775]: time="2025-10-09T20:09:55.0174888Z" level=info msg="createCtr: removing container 2a94c95a2c9688b030c60fe9656091ada0ec812f9739ebba4bef18a196f38a86" id=6586cd0c-fd02-49d1-bf37-781be1324925 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:09:55 first-227352 crio[775]: time="2025-10-09T20:09:55.017525507Z" level=info msg="createCtr: deleting container 2a94c95a2c9688b030c60fe9656091ada0ec812f9739ebba4bef18a196f38a86 from storage" id=6586cd0c-fd02-49d1-bf37-781be1324925 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:09:55 first-227352 crio[775]: time="2025-10-09T20:09:55.019772461Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-first-227352_kube-system_5045c5e6a6dd1ca80e54b9613758fcad_0" id=6586cd0c-fd02-49d1-bf37-781be1324925 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:09:57 first-227352 crio[775]: time="2025-10-09T20:09:57.994081592Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=e1893473-6a3a-4029-9ff1-14d4cf9407a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:09:57 first-227352 crio[775]: time="2025-10-09T20:09:57.995072909Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=3ff5a833-d01e-4328-9798-6e6efbaaa3db name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:09:57 first-227352 crio[775]: time="2025-10-09T20:09:57.996065599Z" level=info msg="Creating container: kube-system/kube-controller-manager-first-227352/kube-controller-manager" id=cbf5998c-b815-412e-a62c-d0e5f3baf105 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:09:57 first-227352 crio[775]: time="2025-10-09T20:09:57.996291034Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:09:57 first-227352 crio[775]: time="2025-10-09T20:09:57.999788219Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:09:58 first-227352 crio[775]: time="2025-10-09T20:09:58.000224932Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:09:58 first-227352 crio[775]: time="2025-10-09T20:09:58.02118145Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=cbf5998c-b815-412e-a62c-d0e5f3baf105 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:09:58 first-227352 crio[775]: time="2025-10-09T20:09:58.02266511Z" level=info msg="createCtr: deleting container ID 15326931a6d1de22264eb435008b889d3ba1d02a50c7fb66509c26fb306f8f1c from idIndex" id=cbf5998c-b815-412e-a62c-d0e5f3baf105 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:09:58 first-227352 crio[775]: time="2025-10-09T20:09:58.022763456Z" level=info msg="createCtr: removing container 15326931a6d1de22264eb435008b889d3ba1d02a50c7fb66509c26fb306f8f1c" id=cbf5998c-b815-412e-a62c-d0e5f3baf105 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:09:58 first-227352 crio[775]: time="2025-10-09T20:09:58.022801665Z" level=info msg="createCtr: deleting container 15326931a6d1de22264eb435008b889d3ba1d02a50c7fb66509c26fb306f8f1c from storage" id=cbf5998c-b815-412e-a62c-d0e5f3baf105 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:09:58 first-227352 crio[775]: time="2025-10-09T20:09:58.025049581Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-first-227352_kube-system_b932dbe2c3fc5facd826328fc92e7e0d_0" id=cbf5998c-b815-412e-a62c-d0e5f3baf105 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 20:10:02.857680    2560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:10:02.858165    2560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:10:02.859854    2560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:10:02.860317    2560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:10:02.861851    2560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001886] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405605] i8042: Warning: Keylock active
	[  +0.012107] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003945] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000985] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000728] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000956] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000773] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000743] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.513842] block sda: the capability attribute has been deprecated.
	[  +0.105193] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026341] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.057710] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:10:02 up  1:52,  0 user,  load average: 0.00, 0.16, 0.38
	Linux first-227352 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 20:09:50 first-227352 kubelet[1800]: E1009 20:09:50.653913    1800 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.58.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.58.2:8443: connect: connection refused" event="&Event{ObjectMeta:{first-227352.186ceb621af4cf53  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:first-227352,UID:first-227352,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node first-227352 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:first-227352,},FirstTimestamp:2025-10-09 20:06:00.986595155 +0000 UTC m=+0.247393336,LastTimestamp:2025-10-09 20:06:00.986595155 +0000 UTC m=+0.247393336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:first-227352,}"
	Oct 09 20:09:50 first-227352 kubelet[1800]: I1009 20:09:50.774535    1800 kubelet_node_status.go:75] "Attempting to register node" node="first-227352"
	Oct 09 20:09:50 first-227352 kubelet[1800]: E1009 20:09:50.775000    1800 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.58.2:8443/api/v1/nodes\": dial tcp 192.168.58.2:8443: connect: connection refused" node="first-227352"
	Oct 09 20:09:51 first-227352 kubelet[1800]: E1009 20:09:51.009493    1800 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"first-227352\" not found"
	Oct 09 20:09:54 first-227352 kubelet[1800]: E1009 20:09:54.993978    1800 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"first-227352\" not found" node="first-227352"
	Oct 09 20:09:55 first-227352 kubelet[1800]: E1009 20:09:55.020125    1800 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 20:09:55 first-227352 kubelet[1800]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 20:09:55 first-227352 kubelet[1800]:  > podSandboxID="3e90dc9889b92611c5cea7606529fce58282d96a786a46d5038ba07aeb0d3bd7"
	Oct 09 20:09:55 first-227352 kubelet[1800]: E1009 20:09:55.020235    1800 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 20:09:55 first-227352 kubelet[1800]:         container kube-scheduler start failed in pod kube-scheduler-first-227352_kube-system(5045c5e6a6dd1ca80e54b9613758fcad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 20:09:55 first-227352 kubelet[1800]:  > logger="UnhandledError"
	Oct 09 20:09:55 first-227352 kubelet[1800]: E1009 20:09:55.020271    1800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-first-227352" podUID="5045c5e6a6dd1ca80e54b9613758fcad"
	Oct 09 20:09:57 first-227352 kubelet[1800]: E1009 20:09:57.616102    1800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.58.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/first-227352?timeout=10s\": dial tcp 192.168.58.2:8443: connect: connection refused" interval="7s"
	Oct 09 20:09:57 first-227352 kubelet[1800]: I1009 20:09:57.776933    1800 kubelet_node_status.go:75] "Attempting to register node" node="first-227352"
	Oct 09 20:09:57 first-227352 kubelet[1800]: E1009 20:09:57.777333    1800 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.58.2:8443/api/v1/nodes\": dial tcp 192.168.58.2:8443: connect: connection refused" node="first-227352"
	Oct 09 20:09:57 first-227352 kubelet[1800]: E1009 20:09:57.993572    1800 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"first-227352\" not found" node="first-227352"
	Oct 09 20:09:58 first-227352 kubelet[1800]: E1009 20:09:58.025420    1800 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 20:09:58 first-227352 kubelet[1800]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 20:09:58 first-227352 kubelet[1800]:  > podSandboxID="d176cf5e4d0ca533171b3886a8da6454f5dd7e6d3360db278c095b62070b47f1"
	Oct 09 20:09:58 first-227352 kubelet[1800]: E1009 20:09:58.025540    1800 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 20:09:58 first-227352 kubelet[1800]:         container kube-controller-manager start failed in pod kube-controller-manager-first-227352_kube-system(b932dbe2c3fc5facd826328fc92e7e0d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 20:09:58 first-227352 kubelet[1800]:  > logger="UnhandledError"
	Oct 09 20:09:58 first-227352 kubelet[1800]: E1009 20:09:58.025582    1800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-first-227352" podUID="b932dbe2c3fc5facd826328fc92e7e0d"
	Oct 09 20:10:00 first-227352 kubelet[1800]: E1009 20:10:00.655081    1800 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.58.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.58.2:8443: connect: connection refused" event="&Event{ObjectMeta:{first-227352.186ceb621af4cf53  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:first-227352,UID:first-227352,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node first-227352 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:first-227352,},FirstTimestamp:2025-10-09 20:06:00.986595155 +0000 UTC m=+0.247393336,LastTimestamp:2025-10-09 20:06:00.986595155 +0000 UTC m=+0.247393336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:first-227352,}"
	Oct 09 20:10:01 first-227352 kubelet[1800]: E1009 20:10:01.009867    1800 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"first-227352\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p first-227352 -n first-227352
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p first-227352 -n first-227352: exit status 6 (309.483168ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 20:10:03.253327  251570 status.go:458] kubeconfig endpoint: get endpoint: "first-227352" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "first-227352" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "first-227352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-227352
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-227352: (1.937379796s)
--- FAIL: TestMinikubeProfile (504.90s)
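
The CRI-O log above fails every control-plane container create with "cannot open sd-bus: No such file or directory", which suggests the OCI runtime is trying to reach a systemd bus socket that is missing inside the node. The following is a minimal Go sketch (not part of this test suite) for checking the usual bus endpoints from inside the node, e.g. after `minikube ssh`; the two candidate paths are assumptions about where sd-bus clients normally connect, not something this report verifies.

// sdbus_check.go - a minimal sketch, assuming the usual systemd/D-Bus socket
// locations; prints which endpoint is missing when CRI-O reports
// "cannot open sd-bus".
package main

import (
	"fmt"
	"os"
)

func main() {
	// Candidate endpoints sd-bus clients typically use (assumed paths).
	candidates := []string{
		"/run/systemd/private",        // systemd's private manager socket
		"/run/dbus/system_bus_socket", // system D-Bus socket
	}
	for _, p := range candidates {
		if _, err := os.Stat(p); err != nil {
			fmt.Printf("missing: %-30s (%v)\n", p, err)
		} else {
			fmt.Printf("present: %s\n", p)
		}
	}
}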

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (7200.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-418542
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-418542-m01 --driver=docker  --container-runtime=crio
E1009 20:35:00.266566  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:38:37.180323  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic: test timed out after 2h0m0s
	running tests:
		TestMultiNode (28m47s)
		TestMultiNode/serial (28m47s)
		TestMultiNode/serial/ValidateNameConflict (5m17s)

                                                
                                                
goroutine 2094 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2484 +0x394
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

                                                
                                                
goroutine 1 [chan receive, 29 minutes]:
testing.(*T).Run(0xc000505180, {0x32044ee?, 0xc00071fa88?}, 0x3c52d60)
	/usr/local/go/src/testing/testing.go:1859 +0x431
testing.runTests.func1(0xc000505180)
	/usr/local/go/src/testing/testing.go:2279 +0x37
testing.tRunner(0xc000505180, 0xc00071fbc8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
testing.runTests(0xc0000100d8, {0x5c636c0, 0x2c, 0x2c}, {0xffffffffffffffff?, 0xc0004c01a0?, 0x5c8bdc0?})
	/usr/local/go/src/testing/testing.go:2277 +0x4b4
testing.(*M).Run(0xc000677180)
	/usr/local/go/src/testing/testing.go:2142 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc000677180)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xdb
main.main()
	_testmain.go:133 +0xa8

                                                
                                                
goroutine 130 [chan receive, 119 minutes]:
testing.(*T).Parallel(0xc000102e00)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000102e00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestOffline(0xc000102e00)
	/home/jenkins/workspace/Build_Cross/test/integration/aab_offline_test.go:32 +0x39
testing.tRunner(0xc000102e00, 0x3c52d78)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 152 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000103180)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000103180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestCertOptions(0xc000103180)
	/home/jenkins/workspace/Build_Cross/test/integration/cert_options_test.go:36 +0xb3
testing.tRunner(0xc000103180, 0x3c52c78)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 235 [IO wait, 102 minutes]:
internal/poll.runtime_pollWait(0x773306c66850, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc00050f980?, 0x900000036?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00050f980)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc00050f980)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0007928c0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1b
net.(*TCPListener).Accept(0xc0007928c0)
	/usr/local/go/src/net/tcpsock.go:380 +0x30
net/http.(*Server).Serve(0xc00054ab00, {0x3f9cdd0, 0xc0007928c0})
	/usr/local/go/src/net/http/server.go:3424 +0x30c
net/http.(*Server).ListenAndServe(0xc00054ab00)
	/usr/local/go/src/net/http/server.go:3350 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 232
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x129

                                                
                                                
goroutine 153 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000103340)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000103340)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestCertExpiration(0xc000103340)
	/home/jenkins/workspace/Build_Cross/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc000103340, 0x3c52c70)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 155 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000582e00)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000582e00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc000582e00)
	/home/jenkins/workspace/Build_Cross/test/integration/docker_test.go:83 +0xb3
testing.tRunner(0xc000582e00, 0x3c52cc0)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 156 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000582fc0)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000582fc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc000582fc0)
	/home/jenkins/workspace/Build_Cross/test/integration/docker_test.go:146 +0xb3
testing.tRunner(0xc000582fc0, 0x3c52cb8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 158 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000583880)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000583880)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestKVMDriverInstallOrUpdate(0xc000583880)
	/home/jenkins/workspace/Build_Cross/test/integration/driver_install_or_update_test.go:48 +0xb3
testing.tRunner(0xc000583880, 0x3c52d08)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 1860 [chan receive, 29 minutes]:
testing.(*T).Run(0xc001544fc0, {0x31f4138?, 0x1a3185c5000?}, 0xc000b45890)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestMultiNode(0xc001544fc0)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:59 +0x3c5
testing.tRunner(0xc001544fc0, 0x3c52d60)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 1863 [chan receive, 5 minutes]:
testing.(*T).Run(0xc001545180, {0x321907a?, 0x4097904?}, 0xc000b80100)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestMultiNode.func1(0xc001545180)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:84 +0x17d
testing.tRunner(0xc001545180, 0xc000b45890)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1860
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 2090 [syscall, 5 minutes]:
syscall.Syscall6(0xf7, 0x3, 0xd, 0xc00071da08, 0x4, 0xc00213a120, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
internal/syscall/unix.Waitid(0xc00071da36?, 0xc00071db60?, 0x5930ab?, 0x7ffd8740c1ab?, 0x0?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x39
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:106
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:251
os.(*Process).pidfdWait(0xc000702018?)
	/usr/local/go/src/os/pidfd_linux.go:105 +0x209
os.(*Process).wait(0xc000100008?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000784180)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc000784180)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc000719880, 0xc000784180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateNameConflict({0x3faf4f0, 0xc000358b60}, 0xc000719880, {0xc0004a6920, 0x10})
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:464 +0x48d
k8s.io/minikube/test/integration.TestMultiNode.func1.1(0xc000719880?)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:86 +0x6b
testing.tRunner(0xc000719880, 0xc000b80100)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1863
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 508 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 507
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 507 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3faf870, 0xc00079c070}, 0xc001531750, 0xc0000cdf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3faf870, 0xc00079c070}, 0x70?, 0xc001531750, 0xc001531798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3faf870?, 0xc00079c070?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593245?, 0xc00017fc80?, 0xc0016aae70?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 533
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

                                                
                                                
goroutine 533 [chan receive, 75 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc0021487e0, 0xc00079c070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 399
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

                                                
                                                
goroutine 704 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc0016a6300, 0xc0016ec0e0)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 703
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

                                                
                                                
goroutine 506 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0008abe90, 0x23)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc00092bce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3fc5360)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0021487e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0xc000714780?, 0x481f72?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x3faf870?, 0xc00079c070?}, 0x41b1b4?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x3faf870, 0xc00079c070}, 0xc00092bf50, {0x3f66880, 0xc000716120}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3f66880?, 0xc000716120?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001f12010, 0x3b9aca00, 0x0, 0x1, 0xc00079c070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 533
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

                                                
                                                
goroutine 449 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc000bf6900, 0xc0015008c0)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 390
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

                                                
                                                
goroutine 2078 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x773306c66b98, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc002148300?, 0xc00052f746?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002148300, {0xc00052f746, 0x8ba, 0x8ba})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000596038, {0xc00052f746?, 0x41835f?, 0x2c43f20?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc000b3a240, {0x3f64c80, 0xc0005a4008})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f64e00, 0xc000b3a240}, {0x3f64c80, 0xc0005a4008}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000596038?, {0x3f64e00, 0xc000b3a240})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000596038, {0x3f64e00, 0xc000b3a240})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f64e00, 0xc000b3a240}, {0x3f64d00, 0xc000596038}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc0016ee230?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2090
	/usr/local/go/src/os/exec/exec.go:748 +0x92b

                                                
                                                
goroutine 532 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3fc1f60, {{0x3fb6f88, 0xc0002483c0?}, 0xc000027d70?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 399
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

                                                
                                                
goroutine 2079 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc000784180, 0xc0016ee230)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2090
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

                                                
                                                
goroutine 682 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001f7f200, 0xc0016efc70)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 681
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

                                                
                                                
goroutine 2077 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x773306c66620, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc002148240?, 0xc000b50a91?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002148240, {0xc000b50a91, 0x56f, 0x56f})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000596020, {0xc000b50a91?, 0x41835f?, 0x2c43f20?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc000b3a210, {0x3f64c80, 0xc0000bc730})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f64e00, 0xc000b3a210}, {0x3f64c80, 0xc0000bc730}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000596020?, {0x3f64e00, 0xc000b3a210})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000596020, {0x3f64e00, 0xc000b3a210})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f64e00, 0xc000b3a210}, {0x3f64d00, 0xc000596020}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc000792340?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2090
	/usr/local/go/src/os/exec/exec.go:748 +0x92b
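
The panic above is the package-level 2h alarm firing while goroutine 2090 is still blocked in os/exec Cmd.Wait (via integration.Run in helpers_test.go:103) on the `minikube start -p multinode-418542-m01` child process. A minimal sketch of the alternative pattern is below: give the subcommand its own deadline so a hung child surfaces as a single test failure instead of panicking the whole binary. runWithTimeout is hypothetical and omits the profiling/logging that the suite's real Run helper does.

// A minimal sketch, not the test suite's actual helper.
package integration

import (
	"context"
	"os/exec"
	"testing"
	"time"
)

// runWithTimeout runs a CLI command with a per-command deadline and fails the
// calling test if the command hangs or exits non-zero.
func runWithTimeout(t *testing.T, d time.Duration, name string, args ...string) []byte {
	t.Helper()
	ctx, cancel := context.WithTimeout(context.Background(), d)
	defer cancel()

	out, err := exec.CommandContext(ctx, name, args...).CombinedOutput()
	if ctx.Err() == context.DeadlineExceeded {
		t.Fatalf("%s %v timed out after %v\n%s", name, args, d, out)
	}
	if err != nil {
		t.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
	return out
}

Usage in this case would look like runWithTimeout(t, 10*time.Minute, "out/minikube-linux-amd64", "start", "-p", "multinode-418542-m01", "--driver=docker", "--container-runtime=crio"), with the 10-minute budget being an illustrative choice.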

                                                
                                    

Test pass (92/166)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.26
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.27
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.47
21 TestBinaryMirror 0.86
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
39 TestErrorSpam/start 0.69
40 TestErrorSpam/status 0.89
41 TestErrorSpam/pause 1.34
42 TestErrorSpam/unpause 1.37
43 TestErrorSpam/stop 1.41
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
50 TestFunctional/serial/KubeContext 0.05
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.08
55 TestFunctional/serial/CacheCmd/cache/add_local 1.75
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
57 TestFunctional/serial/CacheCmd/cache/list 0.05
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
60 TestFunctional/serial/CacheCmd/cache/delete 0.1
65 TestFunctional/serial/LogsCmd 0.94
66 TestFunctional/serial/LogsFileCmd 0.96
69 TestFunctional/parallel/ConfigCmd 0.42
71 TestFunctional/parallel/DryRun 0.5
72 TestFunctional/parallel/InternationalLanguage 0.23
78 TestFunctional/parallel/AddonsCmd 0.18
81 TestFunctional/parallel/SSHCmd 0.65
82 TestFunctional/parallel/CpCmd 2.06
84 TestFunctional/parallel/FileSync 0.27
85 TestFunctional/parallel/CertSync 1.77
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
93 TestFunctional/parallel/License 0.46
102 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
106 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
107 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
108 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
109 TestFunctional/parallel/Version/short 0.06
110 TestFunctional/parallel/Version/components 0.52
111 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
112 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
113 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
114 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
115 TestFunctional/parallel/ImageCommands/ImageBuild 3.79
116 TestFunctional/parallel/ImageCommands/Setup 1.54
118 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
119 TestFunctional/parallel/ProfileCmd/profile_list 0.39
121 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
125 TestFunctional/parallel/MountCmd/specific-port 2
126 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
129 TestFunctional/parallel/MountCmd/VerifyCleanup 1.97
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/delete_echo-server_images 0.04
135 TestFunctional/delete_my-image_image 0.02
136 TestFunctional/delete_minikube_cached_images 0.02
164 TestJSONOutput/start/Audit 0
169 TestJSONOutput/pause/Command 0.49
170 TestJSONOutput/pause/Audit 0
172 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/unpause/Command 0.47
176 TestJSONOutput/unpause/Audit 0
178 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/stop/Command 1.24
182 TestJSONOutput/stop/Audit 0
184 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
186 TestErrorJSONOutput 0.22
188 TestKicCustomNetwork/create_custom_network 28.31
189 TestKicCustomNetwork/use_default_bridge_network 25.77
190 TestKicExistingNetwork 24.48
191 TestKicCustomSubnet 25.49
192 TestKicStaticIP 24.22
193 TestMainNoArgs 0.05
197 TestMountStart/serial/StartWithMountFirst 5.85
198 TestMountStart/serial/VerifyMountFirst 0.28
199 TestMountStart/serial/StartWithMountSecond 5.69
200 TestMountStart/serial/VerifyMountSecond 0.28
201 TestMountStart/serial/DeleteFirst 1.68
202 TestMountStart/serial/VerifyMountPostDelete 0.27
203 TestMountStart/serial/Stop 1.2
204 TestMountStart/serial/RestartStopped 7.41
205 TestMountStart/serial/VerifyMountPostStop 0.27
x
+
TestDownloadOnly/v1.28.0/json-events (5.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-681935 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-681935 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.260379381s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.26s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1009 18:39:22.541646  141519 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1009 18:39:22.541821  141519 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-681935
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-681935: exit status 85 (70.18325ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-681935 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-681935 │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:39:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:39:17.328281  141531 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:39:17.328573  141531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:39:17.328583  141531 out.go:374] Setting ErrFile to fd 2...
	I1009 18:39:17.328588  141531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:39:17.328800  141531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	W1009 18:39:17.328951  141531 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21683-137890/.minikube/config/config.json: open /home/jenkins/minikube-integration/21683-137890/.minikube/config/config.json: no such file or directory
	I1009 18:39:17.329490  141531 out.go:368] Setting JSON to true
	I1009 18:39:17.330676  141531 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1306,"bootTime":1760033851,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:39:17.330778  141531 start.go:143] virtualization: kvm guest
	I1009 18:39:17.333538  141531 out.go:99] [download-only-681935] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1009 18:39:17.333684  141531 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball: no such file or directory
	I1009 18:39:17.333735  141531 notify.go:221] Checking for updates...
	I1009 18:39:17.335830  141531 out.go:171] MINIKUBE_LOCATION=21683
	I1009 18:39:17.337741  141531 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:39:17.339439  141531 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 18:39:17.341319  141531 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 18:39:17.342928  141531 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1009 18:39:17.345693  141531 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 18:39:17.346110  141531 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 18:39:17.372394  141531 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:39:17.372552  141531 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:39:17.805133  141531 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-09 18:39:17.794047883 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:39:17.805262  141531 docker.go:319] overlay module found
	I1009 18:39:17.806952  141531 out.go:99] Using the docker driver based on user configuration
	I1009 18:39:17.807003  141531 start.go:309] selected driver: docker
	I1009 18:39:17.807012  141531 start.go:930] validating driver "docker" against <nil>
	I1009 18:39:17.807100  141531 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:39:17.872195  141531 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-09 18:39:17.861228748 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:39:17.872359  141531 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 18:39:17.872917  141531 start_flags.go:411] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1009 18:39:17.873085  141531 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 18:39:17.875373  141531 out.go:171] Using Docker driver with root privileges
	I1009 18:39:17.876870  141531 cni.go:84] Creating CNI manager for ""
	I1009 18:39:17.876921  141531 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:39:17.876941  141531 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:39:17.877023  141531 start.go:353] cluster config:
	{Name:download-only-681935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-681935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:39:17.878690  141531 out.go:99] Starting "download-only-681935" primary control-plane node in "download-only-681935" cluster
	I1009 18:39:17.878715  141531 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 18:39:17.880061  141531 out.go:99] Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:39:17.880093  141531 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 18:39:17.880239  141531 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:39:17.897887  141531 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1009 18:39:17.898268  141531 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1009 18:39:17.898294  141531 cache.go:58] Caching tarball of preloaded images
	I1009 18:39:17.898718  141531 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1009 18:39:17.898772  141531 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 18:39:17.898845  141531 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1009 18:39:17.900598  141531 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1009 18:39:17.900625  141531 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1009 18:39:17.927287  141531 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1009 18:39:17.927445  141531 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1009 18:39:21.733630  141531 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1009 18:39:21.733981  141531 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/download-only-681935/config.json ...
	I1009 18:39:21.734020  141531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/download-only-681935/config.json: {Name:mkb2b30ac6e0c0ba51e5fa7e93af21a4d4186b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:21.734904  141531 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 18:39:21.735120  141531 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21683-137890/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-681935 host does not exist
	  To start a cluster, run: "minikube start -p download-only-681935"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-681935
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (4.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-484045 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-484045 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.266359084s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.27s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1009 18:39:27.257186  141519 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1009 18:39:27.257229  141519 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-484045
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-484045: exit status 85 (70.174143ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-681935 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-681935 │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │ 09 Oct 25 18:39 UTC │
	│ delete  │ -p download-only-681935                                                                                                                                                   │ download-only-681935 │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │ 09 Oct 25 18:39 UTC │
	│ start   │ -o=json --download-only -p download-only-484045 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-484045 │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:39:23
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:39:23.036844  141903 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:39:23.037149  141903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:39:23.037162  141903 out.go:374] Setting ErrFile to fd 2...
	I1009 18:39:23.037167  141903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:39:23.037395  141903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 18:39:23.037964  141903 out.go:368] Setting JSON to true
	I1009 18:39:23.039088  141903 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1312,"bootTime":1760033851,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:39:23.039197  141903 start.go:143] virtualization: kvm guest
	I1009 18:39:23.041316  141903 out.go:99] [download-only-484045] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:39:23.041494  141903 notify.go:221] Checking for updates...
	I1009 18:39:23.043074  141903 out.go:171] MINIKUBE_LOCATION=21683
	I1009 18:39:23.044565  141903 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:39:23.045892  141903 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 18:39:23.047281  141903 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 18:39:23.048756  141903 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1009 18:39:23.051722  141903 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 18:39:23.052055  141903 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 18:39:23.077904  141903 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:39:23.078035  141903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:39:23.138942  141903 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-09 18:39:23.12839424 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:39:23.139051  141903 docker.go:319] overlay module found
	I1009 18:39:23.140809  141903 out.go:99] Using the docker driver based on user configuration
	I1009 18:39:23.140842  141903 start.go:309] selected driver: docker
	I1009 18:39:23.140849  141903 start.go:930] validating driver "docker" against <nil>
	I1009 18:39:23.140936  141903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:39:23.204540  141903 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-09 18:39:23.194242841 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:39:23.204703  141903 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 18:39:23.205131  141903 start_flags.go:411] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1009 18:39:23.205289  141903 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 18:39:23.207356  141903 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-484045 host does not exist
	  To start a cluster, run: "minikube start -p download-only-484045"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-484045
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnlyKic (0.47s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-070263 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-070263" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-070263
--- PASS: TestDownloadOnlyKic (0.47s)

                                                
                                    
TestBinaryMirror (0.86s)

                                                
                                                
=== RUN   TestBinaryMirror
I1009 18:39:28.450902  141519 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-721152 --alsologtostderr --binary-mirror http://127.0.0.1:36453 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-721152" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-721152
--- PASS: TestBinaryMirror (0.86s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-139298
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-139298: exit status 85 (64.750047ms)

                                                
                                                
-- stdout --
	* Profile "addons-139298" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-139298"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-139298
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-139298: exit status 85 (63.788394ms)

                                                
                                                
-- stdout --
	* Profile "addons-139298" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-139298"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestErrorSpam/start (0.69s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 start --dry-run
--- PASS: TestErrorSpam/start (0.69s)

                                                
                                    
TestErrorSpam/status (0.89s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 status
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 status: exit status 6 (299.0499ms)

                                                
                                                
-- stdout --
	nospam-656427
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 18:56:25.223983  153634 status.go:458] kubeconfig endpoint: get endpoint: "nospam-656427" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 status" failed: exit status 6
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 status
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 status: exit status 6 (294.130361ms)

                                                
                                                
-- stdout --
	nospam-656427
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 18:56:25.518033  153747 status.go:458] kubeconfig endpoint: get endpoint: "nospam-656427" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 status" failed: exit status 6
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 status
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 status: exit status 6 (297.281449ms)

                                                
                                                
-- stdout --
	nospam-656427
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 18:56:25.815149  153861 status.go:458] kubeconfig endpoint: get endpoint: "nospam-656427" does not appear in /home/jenkins/minikube-integration/21683-137890/kubeconfig

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 status" failed: exit status 6
--- PASS: TestErrorSpam/status (0.89s)

                                                
                                    
TestErrorSpam/pause (1.34s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 pause
--- PASS: TestErrorSpam/pause (1.34s)

                                                
                                    
TestErrorSpam/unpause (1.37s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 unpause
--- PASS: TestErrorSpam/unpause (1.37s)

                                                
                                    
TestErrorSpam/stop (1.41s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 stop: (1.222779232s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-656427 --log_dir /tmp/nospam-656427 stop
--- PASS: TestErrorSpam/stop (1.41s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21683-137890/.minikube/files/etc/test/nested/copy/141519/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-158523 cache add registry.k8s.io/pause:3.3: (1.098425761s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.75s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-158523 /tmp/TestFunctionalserialCacheCmdcacheadd_local3748262014/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 cache add minikube-local-cache-test:functional-158523
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-158523 cache add minikube-local-cache-test:functional-158523: (1.374954357s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 cache delete minikube-local-cache-test:functional-158523
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-158523
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.75s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158523 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (282.082147ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.94s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 logs
--- PASS: TestFunctional/serial/LogsCmd (0.94s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.96s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 logs --file /tmp/TestFunctionalserialLogsFileCmd3527563803/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.96s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158523 config get cpus: exit status 14 (72.46445ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158523 config get cpus: exit status 14 (62.655085ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)

                                                
                                    
TestFunctional/parallel/DryRun (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-158523 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-158523 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (209.790293ms)

                                                
                                                
-- stdout --
	* [functional-158523] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:23:35.808076  182726 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:23:35.808411  182726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:23:35.808426  182726 out.go:374] Setting ErrFile to fd 2...
	I1009 19:23:35.808432  182726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:23:35.808725  182726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:23:35.809348  182726 out.go:368] Setting JSON to false
	I1009 19:23:35.810441  182726 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3965,"bootTime":1760033851,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:23:35.810498  182726 start.go:143] virtualization: kvm guest
	I1009 19:23:35.812442  182726 out.go:179] * [functional-158523] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:23:35.814051  182726 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:23:35.814151  182726 notify.go:221] Checking for updates...
	I1009 19:23:35.819156  182726 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:23:35.820633  182726 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:23:35.822274  182726 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:23:35.823913  182726 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:23:35.825354  182726 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:23:35.828052  182726 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:23:35.828727  182726 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:23:35.858604  182726 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:23:35.858754  182726 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:23:35.944063  182726 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:23:35.925493735 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:23:35.944234  182726 docker.go:319] overlay module found
	I1009 19:23:35.946706  182726 out.go:179] * Using the docker driver based on existing profile
	I1009 19:23:35.948357  182726 start.go:309] selected driver: docker
	I1009 19:23:35.948390  182726 start.go:930] validating driver "docker" against &{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:23:35.948599  182726 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:23:35.950892  182726 out.go:203] 
	W1009 19:23:35.952808  182726 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1009 19:23:35.955433  182726 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-158523 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.50s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-158523 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-158523 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (230.248116ms)

                                                
                                                
-- stdout --
	* [functional-158523] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:23:35.596988  182472 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:23:35.597174  182472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:23:35.597187  182472 out.go:374] Setting ErrFile to fd 2...
	I1009 19:23:35.597214  182472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:23:35.597758  182472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
	I1009 19:23:35.598466  182472 out.go:368] Setting JSON to false
	I1009 19:23:35.599713  182472 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3965,"bootTime":1760033851,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:23:35.599821  182472 start.go:143] virtualization: kvm guest
	I1009 19:23:35.602141  182472 out.go:179] * [functional-158523] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1009 19:23:35.603702  182472 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:23:35.603747  182472 notify.go:221] Checking for updates...
	I1009 19:23:35.606614  182472 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:23:35.610723  182472 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
	I1009 19:23:35.618490  182472 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
	I1009 19:23:35.620502  182472 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:23:35.622189  182472 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:23:35.624870  182472 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:23:35.625748  182472 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:23:35.662735  182472 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:23:35.662917  182472 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:23:35.737929  182472 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:23:35.725502246 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:23:35.738110  182472 docker.go:319] overlay module found
	I1009 19:23:35.740362  182472 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1009 19:23:35.744033  182472 start.go:309] selected driver: docker
	I1009 19:23:35.744066  182472 start.go:930] validating driver "docker" against &{Name:functional-158523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-158523 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:23:35.744183  182472 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:23:35.746328  182472 out.go:203] 
	W1009 19:23:35.747587  182472 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1009 19:23:35.748892  182472 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)
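For reference, the two invocations above map directly onto everyday use of an installed minikube binary; a minimal sketch (the profile name is the one from this run, and the JSON field layout can differ between minikube releases, so it is only pretty-printed here):

# Human-readable addon status for a profile
minikube -p functional-158523 addons list

# Machine-readable status; pretty-print with jq rather than assuming field names
minikube -p functional-158523 addons list -o json | jq .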

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh -n functional-158523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 cp functional-158523:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3343028733/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh -n functional-158523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh -n functional-158523 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.06s)
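The cp test copies in both directions and verifies the result over ssh; a hand-run equivalent, with illustrative paths, looks like this:

# Host -> node
minikube -p functional-158523 cp ./cp-test.txt /home/docker/cp-test.txt

# Node -> host (the source is prefixed with the node name)
minikube -p functional-158523 cp functional-158523:/home/docker/cp-test.txt /tmp/cp-test.txt

# Check the file landed inside the node
minikube -p functional-158523 ssh -n functional-158523 "sudo cat /home/docker/cp-test.txt"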

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/141519/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "sudo cat /etc/test/nested/copy/141519/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/141519.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "sudo cat /etc/ssl/certs/141519.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/141519.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "sudo cat /usr/share/ca-certificates/141519.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1415192.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "sudo cat /etc/ssl/certs/1415192.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1415192.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "sudo cat /usr/share/ca-certificates/1415192.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.77s)
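The checks above confirm that an extra CA certificate placed under the minikube home's certs directory on the host is synced into the node under both /etc/ssl/certs and /usr/share/ca-certificates, plus a hash-named entry. A manual spot check might look like this (the 141519.pem name is specific to this run, and openssl being present in the node image is an assumption):

# The certificate should exist in both synced locations
minikube -p functional-158523 ssh "sudo cat /etc/ssl/certs/141519.pem"
minikube -p functional-158523 ssh "sudo cat /usr/share/ca-certificates/141519.pem"

# The hash-named entry should resolve to the same certificate
minikube -p functional-158523 ssh "sudo openssl x509 -in /etc/ssl/certs/51391683.0 -noout -subject"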

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158523 ssh "sudo systemctl is-active docker": exit status 1 (293.870238ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158523 ssh "sudo systemctl is-active containerd": exit status 1 (279.664595ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
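Because this job runs with --container-runtime=crio, the test only asserts that the other runtimes' systemd units are inactive; `systemctl is-active` exits non-zero for an inactive unit (status 3 in the output above), so a quick manual check is:

for unit in docker containerd; do
  # prints the unit state; a non-zero exit means it is not running, which is expected with crio
  minikube -p functional-158523 ssh "sudo systemctl is-active $unit" || echo "$unit inactive (expected)"
done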

                                                
                                    
x
+
TestFunctional/parallel/License (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-158523 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
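The tunnel is started here as a background daemon so later subtests can reach LoadBalancer services; outside the harness it is simply a long-running process (it typically needs sudo to program routes), for example:

# Keep this running while LoadBalancer services need a reachable external IP
minikube -p functional-158523 tunnel --alsologtostderr &
TUNNEL_PID=$!

# ...create Services of type LoadBalancer and use their EXTERNAL-IP...

# Tear it down when finished
kill "$TUNNEL_PID"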

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)
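All three variants above run the same command; update-context rewrites the kubeconfig entry for the profile if the API server address or port has drifted. A manual follow-up check (kubectl on the host PATH is assumed):

# Repair the kubeconfig entry, then confirm what it points at
minikube -p functional-158523 update-context
kubectl config current-context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'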

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-158523 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-158523 image ls --format short --alsologtostderr:
I1009 19:23:49.858802  191712 out.go:360] Setting OutFile to fd 1 ...
I1009 19:23:49.859128  191712 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:23:49.859141  191712 out.go:374] Setting ErrFile to fd 2...
I1009 19:23:49.859147  191712 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:23:49.859520  191712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
I1009 19:23:49.860397  191712 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:23:49.860511  191712 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:23:49.860922  191712 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
I1009 19:23:49.880062  191712 ssh_runner.go:195] Run: systemctl --version
I1009 19:23:49.880143  191712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
I1009 19:23:49.900800  191712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
I1009 19:23:50.009415  191712 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-158523 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-158523 image ls --format table --alsologtostderr:
I1009 19:23:50.419721  192012 out.go:360] Setting OutFile to fd 1 ...
I1009 19:23:50.420028  192012 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:23:50.420042  192012 out.go:374] Setting ErrFile to fd 2...
I1009 19:23:50.420046  192012 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:23:50.420242  192012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
I1009 19:23:50.420877  192012 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:23:50.420987  192012 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:23:50.421425  192012 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
I1009 19:23:50.440884  192012 ssh_runner.go:195] Run: systemctl --version
I1009 19:23:50.440950  192012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
I1009 19:23:50.458418  192012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
I1009 19:23:50.561852  192012 ssh_runner.go:195] Run: sudo crictl images --output json
I1009 19:23:50.929910  141519 retry.go:31] will retry after 9.710854862s: Temporary Error: Get "http:": http: no Host in request URL
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-158523 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"7dd6aaa1717ab7eaae
4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["reg
istry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io
/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-158523 image ls --format json --alsologtostderr:
I1009 19:23:50.175812  191871 out.go:360] Setting OutFile to fd 1 ...
I1009 19:23:50.176098  191871 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:23:50.176107  191871 out.go:374] Setting ErrFile to fd 2...
I1009 19:23:50.176112  191871 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:23:50.176330  191871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
I1009 19:23:50.177160  191871 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:23:50.177284  191871 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:23:50.177885  191871 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
I1009 19:23:50.197401  191871 ssh_runner.go:195] Run: systemctl --version
I1009 19:23:50.197473  191871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
I1009 19:23:50.216141  191871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
I1009 19:23:50.318659  191871 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
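The JSON listing above is an array of objects with id, repoDigests, repoTags, and size (a byte count encoded as a string), which makes it convenient to post-process with jq, for example:

# Every tag known to the container runtime, one per line
minikube -p functional-158523 image ls --format json | jq -r '.[].repoTags[]'

# Largest images first
minikube -p functional-158523 image ls --format json | jq -r 'sort_by(.size | tonumber) | reverse | .[] | "\(.size)\t\(.repoTags[0])"'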

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-158523 image ls --format yaml --alsologtostderr:
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-158523 image ls --format yaml --alsologtostderr:
I1009 19:23:49.935066  191768 out.go:360] Setting OutFile to fd 1 ...
I1009 19:23:49.935294  191768 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:23:49.935303  191768 out.go:374] Setting ErrFile to fd 2...
I1009 19:23:49.935306  191768 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:23:49.935491  191768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
I1009 19:23:49.936057  191768 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:23:49.936148  191768 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:23:49.936577  191768 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
I1009 19:23:49.955409  191768 ssh_runner.go:195] Run: systemctl --version
I1009 19:23:49.955517  191768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
I1009 19:23:49.973758  191768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
I1009 19:23:50.078002  191768 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158523 ssh pgrep buildkitd: exit status 1 (278.965702ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 image build -t localhost/my-image:functional-158523 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-158523 image build -t localhost/my-image:functional-158523 testdata/build --alsologtostderr: (3.285954821s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-158523 image build -t localhost/my-image:functional-158523 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> bcca7ddc886
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-158523
--> 4ccb9739e25
Successfully tagged localhost/my-image:functional-158523
4ccb9739e253a0f17f140f26ba5f05a23ac265656356cf31e1fe49fcdace5b20
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-158523 image build -t localhost/my-image:functional-158523 testdata/build --alsologtostderr:
I1009 19:23:50.376922  191992 out.go:360] Setting OutFile to fd 1 ...
I1009 19:23:50.377102  191992 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:23:50.377115  191992 out.go:374] Setting ErrFile to fd 2...
I1009 19:23:50.377121  191992 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:23:50.377494  191992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
I1009 19:23:50.378442  191992 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:23:50.379284  191992 config.go:182] Loaded profile config "functional-158523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:23:50.379759  191992 cli_runner.go:164] Run: docker container inspect functional-158523 --format={{.State.Status}}
I1009 19:23:50.402469  191992 ssh_runner.go:195] Run: systemctl --version
I1009 19:23:50.402544  191992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-158523
I1009 19:23:50.421512  191992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/functional-158523/id_rsa Username:docker}
I1009 19:23:50.523993  191992 build_images.go:161] Building image from path: /tmp/build.36811687.tar
I1009 19:23:50.524065  191992 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1009 19:23:50.531966  191992 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.36811687.tar
I1009 19:23:50.535992  191992 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.36811687.tar: stat -c "%s %y" /var/lib/minikube/build/build.36811687.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.36811687.tar': No such file or directory
I1009 19:23:50.536034  191992 ssh_runner.go:362] scp /tmp/build.36811687.tar --> /var/lib/minikube/build/build.36811687.tar (3072 bytes)
I1009 19:23:50.553740  191992 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.36811687
I1009 19:23:50.562087  191992 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.36811687 -xf /var/lib/minikube/build/build.36811687.tar
I1009 19:23:50.570821  191992 crio.go:315] Building image: /var/lib/minikube/build/build.36811687
I1009 19:23:50.570893  191992 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-158523 /var/lib/minikube/build/build.36811687 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1009 19:23:53.585311  191992 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-158523 /var/lib/minikube/build/build.36811687 --cgroup-manager=cgroupfs: (3.014383275s)
I1009 19:23:53.585432  191992 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.36811687
I1009 19:23:53.593537  191992 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.36811687.tar
I1009 19:23:53.601327  191992 build_images.go:217] Built localhost/my-image:functional-158523 from /tmp/build.36811687.tar
I1009 19:23:53.601360  191992 build_images.go:133] succeeded building to: functional-158523
I1009 19:23:53.601366  191992 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.79s)
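The stderr above documents how `image build` works on a crio node: the build context is tarred on the host, copied into the node, unpacked, and built with podman using the cgroupfs cgroup manager. A condensed, hand-run sketch of the same flow (paths are illustrative; the supported interface remains `minikube image build`):

# Package the build context on the host
tar -cf /tmp/build.tar -C ./testdata/build .

# Ship it into the node and unpack it where the docker user can write
minikube -p functional-158523 cp /tmp/build.tar /home/docker/build.tar
minikube -p functional-158523 ssh "mkdir -p /home/docker/buildctx && tar -C /home/docker/buildctx -xf /home/docker/build.tar"

# Build with podman, as the helper does for the crio runtime
minikube -p functional-158523 ssh "sudo podman build -t localhost/my-image:functional-158523 /home/docker/buildctx --cgroup-manager=cgroupfs"

# The result should show up in the runtime's image list
minikube -p functional-158523 image ls | grep my-image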

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.514422293s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-158523
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.54s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "333.528634ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "55.000458ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "329.140492ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "53.401161ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
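The timings above contrast the full listing with --light, which skips probing each cluster's live status and is therefore much faster; for scripting, the JSON form is the useful one (the .valid[].Name path is an assumption about the current output layout, so verify it against your minikube version):

# Fast listing that does not contact the clusters
minikube profile list -o json --light | jq .

# Profile names from the full listing, assuming the usual {"valid": [...], "invalid": [...]} shape
minikube profile list -o json | jq -r '.valid[].Name'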

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-158523 /tmp/TestFunctionalparallelMountCmdspecific-port1780211196/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158523 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (299.748075ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1009 19:23:46.142631  141519 retry.go:31] will retry after 624.416162ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-158523 /tmp/TestFunctionalparallelMountCmdspecific-port1780211196/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158523 ssh "sudo umount -f /mount-9p": exit status 1 (284.995458ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-158523 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-158523 /tmp/TestFunctionalparallelMountCmdspecific-port1780211196/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.00s)
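The specific-port variant pins the host side of the 9p mount to a fixed port, which is useful when only selected ports are open between host and node; run by hand (directory and port are illustrative):

# Serve a host directory into the node over 9p on a fixed port; keep this running
minikube mount -p functional-158523 /srv/shared:/mount-9p --port 46464 &
MOUNT_PID=$!

# From inside the node, confirm it is a 9p mount and inspect it
minikube -p functional-158523 ssh "findmnt -T /mount-9p | grep 9p"
minikube -p functional-158523 ssh "ls -la /mount-9p"

# Stop serving; the mount goes away with the process
kill "$MOUNT_PID"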

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 image rm kicbase/echo-server:functional-158523 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-158523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup157564093/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-158523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup157564093/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-158523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup157564093/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158523 ssh "findmnt -T" /mount1: exit status 1 (367.020177ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1009 19:23:48.210595  141519 retry.go:31] will retry after 690.359235ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-158523 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-158523 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-158523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup157564093/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-158523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup157564093/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-158523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup157564093/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.97s)
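VerifyCleanup exercises `mount --kill=true`, which terminates every mount daemon started for the profile and is the supported way to clean up stray mounts; a compressed version of the same sequence (host directory is illustrative):

# Start several mounts of one host directory at different node paths
for target in /mount1 /mount2 /mount3; do
  minikube mount -p functional-158523 /srv/shared:"$target" &
done

# Confirm each is visible from inside the node
for target in /mount1 /mount2 /mount3; do
  minikube -p functional-158523 ssh "findmnt -T $target"
done

# One call kills all mount processes associated with the profile
minikube mount -p functional-158523 --kill=true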

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-158523 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-158523
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-158523
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-158523
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.49s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-487749 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.49s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.47s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-487749 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.47s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (1.24s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-487749 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-487749 --output=json --user=testUser: (1.238078453s)
--- PASS: TestJSONOutput/stop/Command (1.24s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-415895 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-415895 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (71.756797ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8e98c60e-bafb-4588-87a2-f3be14ce3aa3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-415895] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d63d3c0f-463a-43af-8d41-b5a5042ca8bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21683"}}
	{"specversion":"1.0","id":"beac9009-4871-46dc-a6f5-3c6ad561e296","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f7438493-476f-42b0-9f45-26598812ce05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig"}}
	{"specversion":"1.0","id":"34e4e44b-1343-4e88-bc14-2b697038bddb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube"}}
	{"specversion":"1.0","id":"c4c1522d-77f2-4d55-b8e8-3627cffaf41f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f8eb59ef-dc33-4176-948d-c2b8eb677c10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"680cbc87-5202-4ca0-b260-9da93df13467","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-415895" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-415895
--- PASS: TestErrorJSONOutput (0.22s)
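
The stdout captured above is one CloudEvents-style JSON object per line (type io.k8s.sigs.minikube.step, .info, or .error). As a minimal sketch only (assuming a shell with jq available; the profile name, flags, and expected values are reused from the failing run above), the error events can be filtered out of such a stream like this:

	out/minikube-linux-amd64 start -p json-output-error-415895 --output=json --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name) (exit \(.data.exitcode)): \(.data.message)"'
	# expected, per the run above: DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on linux/amd64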

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (28.31s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-419382 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-419382 --network=: (26.120655408s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-419382" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-419382
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-419382: (2.172047063s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.31s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (25.77s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-755683 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-755683 --network=bridge: (23.778990585s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-755683" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-755683
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-755683: (1.97387519s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.77s)

                                                
                                    
x
+
TestKicExistingNetwork (24.48s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1009 20:00:26.055572  141519 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1009 20:00:26.074525  141519 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1009 20:00:26.074631  141519 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1009 20:00:26.074655  141519 cli_runner.go:164] Run: docker network inspect existing-network
W1009 20:00:26.091465  141519 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1009 20:00:26.091499  141519 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1009 20:00:26.091513  141519 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1009 20:00:26.091668  141519 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1009 20:00:26.109215  141519 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00214f5f0}
I1009 20:00:26.109281  141519 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1009 20:00:26.109337  141519 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1009 20:00:26.171321  141519 network_create.go:108] docker network existing-network 192.168.49.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-345563 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-345563 --network=existing-network: (22.357469976s)
helpers_test.go:175: Cleaning up "existing-network-345563" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-345563
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-345563: (1.973455108s)
I1009 20:00:50.520972  141519 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.48s)
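
The log above shows minikube creating the bridge network itself when it is missing, then the test reusing it via --network. A minimal, simplified sketch of the same flow done by hand (commands and names mirrored from this run; the full invocation in the log additionally sets ip-masq/icc driver options and minikube labels):

	docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	  -o com.docker.network.driver.mtu=1500 existing-network
	out/minikube-linux-amd64 start -p existing-network-345563 --network=existing-network
	docker network ls --format {{.Name}}   # existing-network should appear in the list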

                                                
                                    
x
+
TestKicCustomSubnet (25.49s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-616418 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-616418 --subnet=192.168.60.0/24: (23.293856867s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-616418 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-616418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-616418
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-616418: (2.174827694s)
--- PASS: TestKicCustomSubnet (25.49s)

                                                
                                    
x
+
TestKicStaticIP (24.22s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-207010 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-207010 --static-ip=192.168.200.200: (21.943842862s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-207010 ip
helpers_test.go:175: Cleaning up "static-ip-207010" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-207010
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-207010: (2.138630214s)
--- PASS: TestKicStaticIP (24.22s)
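
The two runs above pin either the node subnet or the node IP at start time. A minimal sketch that reads both values back (commands and values copied from the runs above):

	out/minikube-linux-amd64 start -p custom-subnet-616418 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-616418 --format "{{(index .IPAM.Config 0).Subnet}}"   # 192.168.60.0/24
	out/minikube-linux-amd64 start -p static-ip-207010 --static-ip=192.168.200.200
	out/minikube-linux-amd64 -p static-ip-207010 ip   # expected: 192.168.200.200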

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
E1009 20:01:40.259161  141519 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/functional-158523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (5.85s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-489071 --memory=3072 --mount-string /tmp/TestMountStartserial2288656909/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-489071 --memory=3072 --mount-string /tmp/TestMountStartserial2288656909/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.853318211s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.85s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-489071 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)
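
The two tests above cover the host-directory mount flow for a no-Kubernetes node. A minimal sketch of the same flow (flags and the temporary host path copied from the invocation above):

	out/minikube-linux-amd64 start -p mount-start-1-489071 --memory=3072 \
	  --mount-string /tmp/TestMountStartserial2288656909/001:/minikube-host \
	  --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
	  --no-kubernetes --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p mount-start-1-489071 ssh -- ls /minikube-host   # lists the mounted host directory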

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (5.69s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-503588 --memory=3072 --mount-string /tmp/TestMountStartserial2288656909/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-503588 --memory=3072 --mount-string /tmp/TestMountStartserial2288656909/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.685929547s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.69s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-503588 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-489071 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-489071 --alsologtostderr -v=5: (1.67839094s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-503588 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-503588
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-503588: (1.204186182s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.41s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-503588
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-503588: (6.408085836s)
--- PASS: TestMountStart/serial/RestartStopped (7.41s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-503588 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    

Test skip (18/166)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    